Dataset columns:

| column | dtype | range |
|---|---|---|
| bibtex_url | stringlengths | 41 to 53 |
| acl_proceedings | stringlengths | 38 to 50 |
| bibtext | stringlengths | 528 to 3.02k |
| abstract | stringlengths | 17 to 2.35k |
| authors | sequencelengths | 1 to 44 |
| title | stringlengths | 18 to 190 |
| id | stringlengths | 7 to 19 |
| arxiv_id | stringlengths | 10 to 10 |
| GitHub | sequencelengths | 1 to 1 |
| paper_page | stringclasses | 528 values |
| n_linked_authors | int64 | -1 to 15 |
| upvotes | int64 | -1 to 77 |
| num_comments | int64 | -1 to 10 |
| n_authors | int64 | -1 to 52 |
| Models | sequencelengths | 0 to 100 |
| Datasets | sequencelengths | 0 to 15 |
| Spaces | sequencelengths | 0 to 46 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | stringclasses | 2 values |
https://aclanthology.org/2023.emnlp-main.1.bib
https://aclanthology.org/2023.emnlp-main.1/
@inproceedings{zhang-etal-2023-iag, title = "{IAG}: Induction-Augmented Generation Framework for Answering Reasoning Questions", author = "Zhang, Zhebin and Zhang, Xinyu and Ren, Yuanhang and Shi, Saijiang and Han, Meng and Wu, Yongkang and Lai, Ruofei and Cao, Zhao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.1", doi = "10.18653/v1/2023.emnlp-main.1", pages = "1--14", abstract = "Retrieval-Augmented Generation (RAG), by incorporating external knowledge with parametric memory of language models, has become the state-of-the-art architecture for open-domain QA tasks. However, common knowledge bases are inherently constrained by limited coverage and noisy information, making retrieval-based approaches inadequate to answer implicit reasoning questions. In this paper, we propose an Induction-Augmented Generation (IAG) framework that utilizes inductive knowledge along with the retrieved documents for implicit reasoning. We leverage large language models (LLMs) for deriving such knowledge via a novel prompting method based on inductive reasoning patterns. On top of this, we implement two versions of IAG named IAG-GPT and IAG-Student, respectively. IAG-GPT directly utilizes the knowledge generated by GPT-3 for answer prediction, while IAG-Student gets rid of dependencies on GPT service at inference time by incorporating a student inductor model. The inductor is firstly trained via knowledge distillation and further optimized by back-propagating the generator feedback via differentiable beam scores. Experimental results show that IAG outperforms RAG baselines as well as ChatGPT on two Open-Domain QA tasks. Notably, our best models have won the first place in the official leaderboards of CSQA2.0 (since Nov 1, 2022) and StrategyQA (since Jan 8, 2023).", }
Retrieval-Augmented Generation (RAG), by incorporating external knowledge with the parametric memory of language models, has become the state-of-the-art architecture for open-domain QA tasks. However, common knowledge bases are inherently constrained by limited coverage and noisy information, making retrieval-based approaches inadequate for answering implicit reasoning questions. In this paper, we propose an Induction-Augmented Generation (IAG) framework that utilizes inductive knowledge along with the retrieved documents for implicit reasoning. We leverage large language models (LLMs) to derive such knowledge via a novel prompting method based on inductive reasoning patterns. On top of this, we implement two versions of IAG, named IAG-GPT and IAG-Student. IAG-GPT directly utilizes the knowledge generated by GPT-3 for answer prediction, while IAG-Student removes the dependency on the GPT service at inference time by incorporating a student inductor model. The inductor is first trained via knowledge distillation and further optimized by back-propagating the generator feedback via differentiable beam scores. Experimental results show that IAG outperforms RAG baselines as well as ChatGPT on two open-domain QA tasks. Notably, our best models have won first place on the official leaderboards of CSQA2.0 (since Nov 1, 2022) and StrategyQA (since Jan 8, 2023).
[ "Zhang, Zhebin", "Zhang, Xinyu", "Ren, Yuanhang", "Shi, Saijiang", "Han, Meng", "Wu, Yongkang", "Lai, Ruofei", "Cao, Zhao" ]
IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions
emnlp-main.1
2311.18397
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
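The induction-then-generate flow described in the abstract above can be made concrete with a short sketch. This is a hypothetical illustration, not the paper's code: the prompt wording and the `llm`/`generator` callables are invented stand-ins for GPT-3 and the answer generator.

```python
# Hypothetical sketch of the IAG flow: induce general knowledge with an LLM,
# then feed it to the answer generator alongside retrieved evidence.

def build_induction_prompt(question: str) -> str:
    """Ask an LLM to derive general (inductive) knowledge for a question."""
    return (
        "Derive a general statement that helps answer the question, "
        "by reasoning from specific instances to a shared pattern.\n"
        f"Question: {question}\n"
        "Inductive knowledge:"
    )

def answer_with_iag(question, retrieved_docs, llm, generator):
    # 1) Induce knowledge with a large LM (IAG-GPT uses GPT-3 here).
    knowledge = llm(build_induction_prompt(question))
    # 2) Pass the induced knowledge with the retrieved documents to the
    #    generator, mirroring a RAG reader with one extra passage.
    context = [knowledge] + retrieved_docs
    return generator(question=question, passages=context)
```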
https://aclanthology.org/2023.emnlp-main.2.bib
https://aclanthology.org/2023.emnlp-main.2/
@inproceedings{yamamoto-matsuzaki-2023-absolute, title = "Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position", author = "Yamamoto, Yuji and Matsuzaki, Takuya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.2", doi = "10.18653/v1/2023.emnlp-main.2", pages = "15--28", abstract = "Attention weight is a clue to interpret how a Transformer-based model makes an inference. In some attention heads, the attention focuses on the neighbors of each token. This allows the output vector of each token to depend on the surrounding tokens and contributes to make the inference context-dependent. We analyze the mechanism behind the concentration of attention on nearby tokens. We show that the phenomenon emerges as follows: (1) learned position embedding has sinusoid-like components, (2) such components are transmitted to the query and the key in the self-attention, (3) the attention head shifts the phases of the sinusoid-like components so that the attention concentrates on nearby tokens at specific relative positions. In other words, a certain type of Transformer-based model acquires the sinusoidal positional encoding to some extent on its own through Masked Language Modeling.", }
Attention weight is a clue to interpreting how a Transformer-based model makes an inference. In some attention heads, the attention focuses on the neighbors of each token. This allows the output vector of each token to depend on the surrounding tokens and contributes to making the inference context-dependent. We analyze the mechanism behind this concentration of attention on nearby tokens. We show that the phenomenon emerges as follows: (1) the learned position embedding has sinusoid-like components, (2) such components are transmitted to the query and the key in the self-attention, and (3) the attention head shifts the phases of the sinusoid-like components so that the attention concentrates on nearby tokens at specific relative positions. In other words, a certain type of Transformer-based model acquires the sinusoidal positional encoding to some extent on its own through Masked Language Modeling.
[ "Yamamoto, Yuji", "Matsuzaki, Takuya" ]
Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position
emnlp-main.2
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
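The mechanism in the abstract above has a simple numerical core: the dot product of two phase-shifted sinusoids at shared frequencies depends only on the relative offset, so attention can peak at a fixed nearby position. A minimal numpy illustration; the frequencies and phase shift are arbitrary choices, not values from the paper.

```python
import numpy as np

# Q[i] . K[j] = sum_f cos((i - j) * f + phase), which depends only on i - j,
# so the resulting attention pattern is a function of relative position.
freqs = np.exp(-np.arange(32) / 8.0)      # 32 arbitrary frequencies
pos = np.arange(128)[:, None]             # token positions 0..127

def sinusoid(phase=0.0):
    angles = pos * freqs + phase
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

Q = sinusoid(phase=0.3)                   # head-specific phase shift on queries
K = sinusoid(phase=0.0)
scores = Q @ K.T
print(np.argmax(scores[64]) - 64)         # a small fixed offset from the query
```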
https://aclanthology.org/2023.emnlp-main.3.bib
https://aclanthology.org/2023.emnlp-main.3/
@inproceedings{qiang-etal-2023-chinese, title = "{C}hinese Lexical Substitution: Dataset and Method", author = "Qiang, Jipeng and Liu, Kang and Li, Ying and Li, Yun and Zhu, Yi and Yuan, Yun-Hao and Hu, Xiaocheng and Ouyang, Xiaoye", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.3", doi = "10.18653/v1/2023.emnlp-main.3", pages = "29--42", abstract = "Existing lexical substitution (LS) benchmarks were collected by asking human annotators to think of substitutes from memory, resulting in benchmarks with limited coverage and relatively small scales. To overcome this problem, we propose a novel annotation method to construct an LS dataset based on human and machine collaboration. Based on our annotation method, we construct the first Chinese LS dataset CHNLS which consists of 33,695 instances and 144,708 substitutes, covering three text genres (News, Novel, and Wikipedia). Specifically, we first combine four unsupervised LS methods as an ensemble method to generate the candidate substitutes, and then let human annotators judge these candidates or add new ones. This collaborative process combines the diversity of machine-generated substitutes with the expertise of human annotators. Experimental results that the ensemble method outperforms other LS methods. To our best knowledge, this is the first study for the Chinese LS task.", }
Existing lexical substitution (LS) benchmarks were collected by asking human annotators to think of substitutes from memory, resulting in benchmarks with limited coverage and relatively small scales. To overcome this problem, we propose a novel annotation method to construct an LS dataset based on human and machine collaboration. Based on our annotation method, we construct the first Chinese LS dataset, CHNLS, which consists of 33,695 instances and 144,708 substitutes, covering three text genres (News, Novel, and Wikipedia). Specifically, we first combine four unsupervised LS methods as an ensemble method to generate the candidate substitutes, and then let human annotators judge these candidates or add new ones. This collaborative process combines the diversity of machine-generated substitutes with the expertise of human annotators. Experimental results show that the ensemble method outperforms the other LS methods. To the best of our knowledge, this is the first study of the Chinese LS task.
[ "Qiang, Jipeng", "Liu, Kang", "Li, Ying", "Li, Yun", "Zhu, Yi", "Yuan, Yun-Hao", "Hu, Xiaocheng", "Ouyang, Xiaoye" ]
Chinese Lexical Substitution: Dataset and Method
emnlp-main.3
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
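A sketch of the machine side of the human-machine collaboration described above: pool candidates from several unsupervised LS systems and rank them by agreement before handing them to annotators. The `methods` callables and the voting rule are illustrative assumptions, not the paper's exact ensemble.

```python
from collections import Counter

def ensemble_candidates(sentence, target, methods, top_k=10):
    """Pool substitutes from several unsupervised LS systems (placeholders)."""
    votes = Counter()
    for method in methods:                 # e.g. four unsupervised LS models
        for substitute in method(sentence, target):
            votes[substitute] += 1
    ranked = [w for w, _ in votes.most_common() if w != target]
    return ranked[:top_k]                  # shown to annotators to judge/extend
```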
https://aclanthology.org/2023.emnlp-main.4.bib
https://aclanthology.org/2023.emnlp-main.4/
@inproceedings{sun-etal-2023-decoding, title = "Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting", author = "Sun, Chenkai and Li, Jinning and Fung, Yi and Chan, Hou and Abdelzaher, Tarek and Zhai, ChengXiang and Ji, Heng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.4", doi = "10.18653/v1/2023.emnlp-main.4", pages = "43--57", abstract = "Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury. To effectively forecast responses, it is essential to develop measures that leverage the social dynamics and contextual information surrounding individuals, especially in cases where explicit profiles or historical actions of the users are limited (referred to as lurkers). As shown in a previous study, 97{\%} of all tweets are produced by only the most active 25{\%} of users. However, existing approaches have limited exploration of how to best process and utilize these important features. To address this gap, we propose a novel framework, named SocialSense, that leverages a large language model to induce a belief-centered graph on top of an existent social network, along with graph-based propagation to capture social dynamics. We hypothesize that the induced graph that bridges the gap between distant users who share similar beliefs allows the model to effectively capture the response patterns. Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings, demonstrating its effectiveness in response forecasting. Moreover, the analysis reveals the framework{'}s capability to effectively handle unseen user and lurker scenarios, further highlighting its robustness and practical applicability.", }
Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury. To effectively forecast responses, it is essential to develop measures that leverage the social dynamics and contextual information surrounding individuals, especially in cases where explicit profiles or historical actions of the users are limited (such users are referred to as lurkers). As shown in a previous study, 97% of all tweets are produced by only the most active 25% of users. However, how to best process and utilize these important features remains under-explored in existing approaches. To address this gap, we propose a novel framework, named SocialSense, that leverages a large language model to induce a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics. We hypothesize that the induced graph, which bridges the gap between distant users who share similar beliefs, allows the model to effectively capture response patterns. Our method surpasses the existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings, demonstrating its effectiveness in response forecasting. Moreover, the analysis reveals the framework's capability to effectively handle unseen-user and lurker scenarios, further highlighting its robustness and practical applicability.
[ "Sun, Chenkai", "Li, Jinning", "Fung, Yi", "Chan, Hou", "Abdelzaher, Tarek", "Zhai, ChengXiang", "Ji, Heng" ]
Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting
emnlp-main.4
2310.13297
[ "https://github.com/chenkaisun/socialsense" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
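The graph-based propagation step mentioned in the abstract above can be sketched with plain dictionaries: user representations are repeatedly mixed with their neighbors', so two distant users connected through a shared belief node drift toward similar representations. The data structures and the mixing rule here are simplifying assumptions.

```python
import numpy as np

def propagate(features: dict, edges: list, steps: int = 2, alpha: float = 0.5):
    """features: node -> vector; edges: (node, node) pairs over those nodes,
    covering both social links and induced belief links."""
    neighbors = {u: [] for u in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    for _ in range(steps):
        updated = {}
        for u, h in features.items():
            if neighbors[u]:
                mean = np.mean([features[v] for v in neighbors[u]], axis=0)
                updated[u] = alpha * h + (1 - alpha) * mean   # mix with neighbors
            else:
                updated[u] = h
        features = updated
    return features
```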
https://aclanthology.org/2023.emnlp-main.5.bib
https://aclanthology.org/2023.emnlp-main.5/
@inproceedings{yao-etal-2023-fine, title = "Fine-grained Conversational Decoding via Isotropic and Proximal Search", author = "Yao, Yuxuan and Wu, Han and Xu, Qiling and Song, Linqi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.5", doi = "10.18653/v1/2023.emnlp-main.5", pages = "58--70", abstract = "General-purpose text decoding approaches are usually adopted for dialogue response generation. Although the quality of the generated responses can be improved with dialogue-specific encoding methods, conversational decoding methods are still under-explored. Inspired by SimDRC that a good dialogue feature space should follow the rules of locality and isotropy, we present a fine-grained conversational decoding method, termed isotropic and proximal search (IPS). Our method is designed to generate the semantic-concentrated response, while still maintaining informativeness and discrimination against the context. Experiments show that our approach significantly outperforms existing decoding strategies in the dialogue field across both automatic and human evaluation metrics. More in-depth analyses further confirm the effectiveness of our approach.", }
General-purpose text decoding approaches are usually adopted for dialogue response generation. Although the quality of the generated responses can be improved with dialogue-specific encoding methods, conversational decoding methods are still under-explored. Inspired by SimDRC's finding that a good dialogue feature space should follow the rules of locality and isotropy, we present a fine-grained conversational decoding method, termed isotropic and proximal search (IPS). Our method is designed to generate semantically concentrated responses while still maintaining informativeness and discrimination against the context. Experiments show that our approach significantly outperforms existing decoding strategies in the dialogue field across both automatic and human evaluation metrics. More in-depth analyses further confirm the effectiveness of our approach.
[ "Yao, Yuxuan", "Wu, Han", "Xu, Qiling", "Song, Linqi" ]
Fine-grained Conversational Decoding via Isotropic and Proximal Search
emnlp-main.5
2310.08130
[ "https://github.com/starrYYxuan/IPS" ]
https://huggingface.co./papers/2310.08130
0
0
0
4
[]
[]
[]
1
Poster
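One plausible reading of an isotropy-plus-proximity decoding score, reconstructed from the abstract alone (the authors' actual formulation may differ): rank each candidate token by its model probability, a reward for staying close to the response generated so far, and a penalty for being too similar to the context representations.

```python
import numpy as np

def ips_score(p_lm, cand_vec, resp_vecs, ctx_vecs, alpha=0.6, beta=0.2):
    """Hedged reconstruction: p_lm is the model probability of the candidate,
    cand_vec its hidden state, resp_vecs/ctx_vecs the response and context
    token representations. Weights alpha/beta are invented for illustration."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    # proximal: stay close to the response generated so far
    proximal = np.mean([cos(cand_vec, r) for r in resp_vecs]) if resp_vecs else 0.0
    # discriminative: avoid collapsing onto the context representations
    discrim = max(cos(cand_vec, c) for c in ctx_vecs)
    return alpha * p_lm + beta * proximal - (1 - alpha - beta) * discrim
```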
https://aclanthology.org/2023.emnlp-main.6.bib
https://aclanthology.org/2023.emnlp-main.6/
@inproceedings{stefanovitch-piskorski-2023-holistic, title = "Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign", author = "Stefanovitch, Nicolas and Piskorski, Jakub", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.6", doi = "10.18653/v1/2023.emnlp-main.6", pages = "71--86", abstract = "In this paper we report on the complexity of persuasion technique annotation in the context of a large multilingual annotation campaign involving 6 languages and approximately 40 annotators. We highlight the techniques that appear to be difficult for humans to annotate and elaborate on our findings on the causes of this phenomenon. We introduce Holistic IAA, a new word embedding-based annotator agreement metric and we report on various experiments using this metric and its correlation with the traditional Inter Annotator Agreement (IAA) metrics. However, given somewhat limited and loose interaction between annotators, i.e., only a few annotators annotate the same document subsets, we try to devise a way to assess the coherence of the entire dataset and strive to find a good proxy for IAA between annotators tasked to annotate different documents and in different languages, for which classical IAA metrics can not be applied.", }
In this paper we report on the complexity of persuasion technique annotation in the context of a large multilingual annotation campaign involving 6 languages and approximately 40 annotators. We highlight the techniques that appear to be difficult for humans to annotate and elaborate on our findings regarding the causes of this phenomenon. We introduce Holistic IAA, a new word embedding-based annotator agreement metric, and report on various experiments using this metric and its correlation with traditional Inter-Annotator Agreement (IAA) metrics. However, given the somewhat limited and loose interaction between annotators, i.e., only a few annotators annotate the same document subsets, we try to devise a way to assess the coherence of the entire dataset and strive to find a good proxy for IAA between annotators tasked with annotating different documents in different languages, for which classical IAA metrics cannot be applied.
[ "Stefanovitch, Nicolas", "Piskorski, Jakub" ]
Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign
emnlp-main.6
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
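A hedged sketch of the idea behind an embedding-based agreement proxy: rather than requiring two annotators to label the same documents, compare the embedding centroids of the spans each annotator assigned to a given label. `embed` is a placeholder for any multilingual sentence encoder; the actual Holistic IAA definition is the paper's.

```python
import numpy as np

def holistic_agreement(spans_a, spans_b, embed):
    """spans_a/spans_b: label -> list of span texts annotated by annotators
    A and B, possibly over different documents and languages."""
    sims = []
    for label in set(spans_a) & set(spans_b):
        ca = np.mean([embed(s) for s in spans_a[label]], axis=0)
        cb = np.mean([embed(s) for s in spans_b[label]], axis=0)
        sims.append(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb)))
    # mean centroid similarity over shared labels as an agreement proxy
    return float(np.mean(sims)) if sims else float("nan")
```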
https://aclanthology.org/2023.emnlp-main.7.bib
https://aclanthology.org/2023.emnlp-main.7/
@inproceedings{borenstein-etal-2023-phd, title = "{PHD}: Pixel-Based Language Modeling of Historical Documents", author = "Borenstein, Nadav and Rust, Phillip and Elliott, Desmond and Augenstein, Isabelle", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.7", doi = "10.18653/v1/2023.emnlp-main.7", pages = "87--107", abstract = "The digitisation of historical documents has provided historians with unprecedented research opportunities. Yet, the conventional approach to analysing historical documents involves converting them from images to text using OCR, a process that overlooks the potential benefits of treating them as images and introduces high levels of noise. To bridge this gap, we take advantage of recent advancements in pixel-based language models trained to reconstruct masked patches of pixels instead of predicting token distributions. Due to the scarcity of real historical scans, we propose a novel method for generating synthetic scans to resemble real historical documents. We then pre-train our model, PHD, on a combination of synthetic scans and real historical newspapers from the 1700-1900 period. Through our experiments, we demonstrate that PHD exhibits high proficiency in reconstructing masked image patches and provide evidence of our model{'}s noteworthy language understanding capabilities. Notably, we successfully apply our model to a historical QA task, highlighting its usefulness in this domain.", }
The digitisation of historical documents has provided historians with unprecedented research opportunities. Yet, the conventional approach to analysing historical documents involves converting them from images to text using OCR, a process that overlooks the potential benefits of treating them as images and introduces high levels of noise. To bridge this gap, we take advantage of recent advancements in pixel-based language models trained to reconstruct masked patches of pixels instead of predicting token distributions. Due to the scarcity of real historical scans, we propose a novel method for generating synthetic scans to resemble real historical documents. We then pre-train our model, PHD, on a combination of synthetic scans and real historical newspapers from the 1700–1900 period. Through our experiments, we demonstrate that PHD exhibits high proficiency in reconstructing masked image patches and provide evidence of our model's noteworthy language understanding capabilities. Notably, we successfully apply our model to a historical QA task, highlighting its usefulness in this domain.
[ "Borenstein, Nadav", "Rust, Phillip", "Elliott, Desmond", "Augenstein, Isabelle" ]
PHD: Pixel-Based Language Modeling of Historical Documents
emnlp-main.7
2310.18343
[ "https://github.com/nadavborenstein/pixel-bw" ]
https://huggingface.co./papers/2310.18343
1
1
0
4
[]
[]
[]
1
Poster
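The masked-patch objective described above, in miniature: split a rendered page into fixed-size patches, hide a random subset, and train the model to reconstruct the hidden pixels. The patch size and masking ratio below are arbitrary, not the paper's settings.

```python
import numpy as np

def mask_patches(image: np.ndarray, patch: int = 16, ratio: float = 0.25,
                 rng=np.random.default_rng(0)):
    """Zero out a random subset of patches; the model's training target is
    to reconstruct the pixels wherever mask is True."""
    h, w = image.shape[:2]
    masked = image.copy()
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    n_mask = int(mask.size * ratio)
    idx = rng.choice(mask.size, size=n_mask, replace=False)
    mask.flat[idx] = True
    for i, j in zip(*np.nonzero(mask)):
        masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
    return masked, mask
```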
https://aclanthology.org/2023.emnlp-main.8.bib
https://aclanthology.org/2023.emnlp-main.8/
@inproceedings{wang-etal-2023-primacy, title = "Primacy Effect of {C}hat{GPT}", author = "Wang, Yiwei and Cai, Yujun and Chen, Muhao and Liang, Yuxuan and Hooi, Bryan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.8", doi = "10.18653/v1/2023.emnlp-main.8", pages = "108--115", abstract = "Instruction-tuned large language models (LLMs), such as ChatGPT, have led to promising zero-shot performance in discriminative natural language understanding (NLU) tasks. This involves querying the LLM using a prompt containing the question, and the candidate labels to choose from. The question-answering capabilities of ChatGPT arise from its pre-training on large amounts of human-written text, as well as its subsequent fine-tuning on human preferences, which motivates us to ask: Does ChatGPT also inherit humans{'} cognitive biases? In this paper, we study the primacy effect of ChatGPT: the tendency of selecting the labels at earlier positions as the answer. We have two main findings: i) ChatGPT{'}s decision is sensitive to the order of labels in the prompt; ii) ChatGPT has a clearly higher chance to select the labels at earlier positions as the answer. We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions. We release the source code at https://github.com/wangywUST/PrimacyEffectGPT.", }
Instruction-tuned large language models (LLMs), such as ChatGPT, have led to promising zero-shot performance in discriminative natural language understanding (NLU) tasks. This involves querying the LLM using a prompt containing the question, and the candidate labels to choose from. The question-answering capabilities of ChatGPT arise from its pre-training on large amounts of human-written text, as well as its subsequent fine-tuning on human preferences, which motivates us to ask: Does ChatGPT also inherit humans' cognitive biases? In this paper, we study the primacy effect of ChatGPT: the tendency to select the labels at earlier positions as the answer. We have two main findings: i) ChatGPT's decision is sensitive to the order of labels in the prompt; ii) ChatGPT has a clearly higher chance of selecting the labels at earlier positions as the answer. We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions. We release the source code at https://github.com/wangywUST/PrimacyEffectGPT.
[ "Wang, Yiwei", "Cai, Yujun", "Chen, Muhao", "Liang, Yuxuan", "Hooi, Bryan" ]
Primacy Effect of ChatGPT
emnlp-main.8
2310.13206
[ "https://github.com/wangywust/primacyeffectgpt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
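The probe behind both findings is easy to state in code: present the same candidate labels in every order and count where in the prompt the chosen label sat. `query_model` stands in for an actual ChatGPT call, and the numbered-list prompt format is an assumption.

```python
import itertools
import collections

def primacy_probe(question, labels, query_model):
    """Permute the label order (feasible for a handful of labels) and tally
    the prompt position of the model's chosen label."""
    position_counts = collections.Counter()
    for order in itertools.permutations(labels):
        prompt = question + "\n" + "\n".join(
            f"{i + 1}. {lab}" for i, lab in enumerate(order))
        choice = query_model(prompt)           # placeholder: returns one label
        position_counts[order.index(choice)] += 1
    return position_counts   # skew toward index 0 indicates a primacy effect
```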
https://aclanthology.org/2023.emnlp-main.9.bib
https://aclanthology.org/2023.emnlp-main.9/
@inproceedings{kawabata-sugawara-2023-evaluating, title = "Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension", author = "Kawabata, Akira and Sugawara, Saku", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.9", doi = "10.18653/v1/2023.emnlp-main.9", pages = "116--143", abstract = "To precisely evaluate a language model{'}s capability for logical reading comprehension, we present a dataset for testing the understanding of the rationale behind critical reasoning. For questions taken from an existing multiple-choice logical reading comprehension dataset, we crowdsource rationale texts that explain why we should select or eliminate answer options, resulting in 3,003 multiple-choice subquestions that are associated with 943 main questions. Experiments on our dataset show that recent large language models (e.g., InstructGPT) struggle to answer the subquestions even if they are able to answer the main questions correctly. We find that the models perform particularly poorly in answering subquestions written for the incorrect options of the main questions, implying that the models have a limited capability for explaining why incorrect alternatives should be eliminated. These results suggest that our dataset encourages further investigation into the critical reasoning ability of language models while focusing on the elimination process of relevant alternatives.", }
To precisely evaluate a language model's capability for logical reading comprehension, we present a dataset for testing the understanding of the rationale behind critical reasoning. For questions taken from an existing multiple-choice logical reading comprehension dataset, we crowdsource rationale texts that explain why we should select or eliminate answer options, resulting in 3,003 multiple-choice subquestions that are associated with 943 main questions. Experiments on our dataset show that recent large language models (e.g., InstructGPT) struggle to answer the subquestions even if they are able to answer the main questions correctly. We find that the models perform particularly poorly in answering subquestions written for the incorrect options of the main questions, implying that the models have a limited capability for explaining why incorrect alternatives should be eliminated. These results suggest that our dataset encourages further investigation into the critical reasoning ability of language models while focusing on the elimination process of relevant alternatives.
[ "Kawabata, Akira", "Sugawara, Saku" ]
Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension
emnlp-main.9
2311.18353
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.10.bib
https://aclanthology.org/2023.emnlp-main.10/
@inproceedings{muller-etal-2023-evaluating, title = "Evaluating and Modeling Attribution for Cross-Lingual Question Answering", author = "Muller, Benjamin and Wieting, John and Clark, Jonathan and Kwiatkowski, Tom and Ruder, Sebastian and Soares, Livio and Aharoni, Roee and Herzig, Jonathan and Wang, Xinyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.10", doi = "10.18653/v1/2023.emnlp-main.10", pages = "144--157", abstract = "Trustworthy answer content is abundant in many high-resource languages and is instantly accessible through question answering systems {---} yet this content can be hard to access for those that do not speak these languages. The leap forward in cross-lingual modeling quality offered by generative language models offers much promise, yet their raw generations often fall short in factuality. To improve trustworthiness in these systems, a promising direction is to attribute the answer to a retrieved source, possibly in a content-rich language different from the query. Our work is the first to study attribution for cross-lingual question answering. First, we collect data in 5 languages to assess the attribution level of a state-of-the-art cross-lingual QA system. To our surprise, we find that a substantial portion of the answers is not attributable to any retrieved passages (up to 50{\%} of answers exactly matching a gold reference) despite the system being able to attend directly to the retrieved text. Second, to address this poor attribution level, we experiment with a wide range of attribution detection techniques. We find that Natural Language Inference models and PaLM 2 fine-tuned on a very small amount of attribution data can accurately detect attribution. With these models, we improve the attribution level of a cross-lingual QA system. Overall, we show that current academic generative cross-lingual QA systems have substantial shortcomings in attribution and we build tooling to mitigate these issues.", }
Trustworthy answer content is abundant in many high-resource languages and is instantly accessible through question answering systems, yet this content can be hard to access for those who do not speak these languages. The leap forward in cross-lingual modeling quality offered by generative language models offers much promise, yet their raw generations often fall short in factuality. To improve trustworthiness in these systems, a promising direction is to attribute the answer to a retrieved source, possibly in a content-rich language different from the query. Our work is the first to study attribution for cross-lingual question answering. First, we collect data in 5 languages to assess the attribution level of a state-of-the-art cross-lingual QA system. To our surprise, we find that a substantial portion of the answers is not attributable to any retrieved passage (up to 50% of answers exactly matching a gold reference) despite the system being able to attend directly to the retrieved text. Second, to address this poor attribution level, we experiment with a wide range of attribution detection techniques. We find that Natural Language Inference models and PaLM 2 fine-tuned on a very small amount of attribution data can accurately detect attribution. With these models, we improve the attribution level of a cross-lingual QA system. Overall, we show that current academic generative cross-lingual QA systems have substantial shortcomings in attribution, and we build tooling to mitigate these issues.
[ "Muller, Benjamin", "Wieting, John", "Clark, Jonathan", "Kwiatkowski, Tom", "Ruder, Sebastian", "Soares, Livio", "Aharoni, Roee", "Herzig, Jonathan", "Wang, Xinyi" ]
Evaluating and Modeling Attribution for Cross-Lingual Question Answering
emnlp-main.10
2305.14332
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
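Attribution detection reduced to entailment, as the abstract suggests: an answer counts as attributable if some retrieved passage entails the question-answer pair. `entails` is a stand-in for any NLI model's entailment probability (the paper fine-tunes NLI models and PaLM 2); the hypothesis template and threshold are illustrative.

```python
def is_attributable(question, answer, passages, entails, threshold=0.5):
    """entails(premise, hypothesis) -> float in [0, 1] is a placeholder for
    an NLI model; the template and threshold are invented for the sketch."""
    hypothesis = f"The answer to '{question}' is '{answer}'."
    return any(entails(premise=p, hypothesis=hypothesis) > threshold
               for p in passages)
```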
https://aclanthology.org/2023.emnlp-main.11.bib
https://aclanthology.org/2023.emnlp-main.11/
@inproceedings{oladipo-etal-2023-better, title = "Better Quality Pre-training Data and T5 Models for {A}frican Languages", author = "Oladipo, Akintunde and Adeyemi, Mofetoluwa and Ahia, Orevaoghene and Owodunni, Abraham and Ogundepo, Odunayo and Adelani, David and Lin, Jimmy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.11", doi = "10.18653/v1/2023.emnlp-main.11", pages = "158--168", abstract = "In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages, designed by carefully auditing existing pretraining corpora to understand and rectify prevalent quality issues. To compile this dataset, we undertake a rigorous examination of current data sources for thirteen languages within one of the most extensive multilingual web crawls, mC4, and extract cleaner data through meticulous auditing and improved web crawling strategies. Subsequently, we pretrain a new T5-based model on this dataset and evaluate its performance on multiple downstream tasks. Our model demonstrates better downstream effectiveness over existing pretrained models across four NLP tasks, underscoring the critical role data quality plays in pretraining language models in low-resource scenarios. Specifically, on cross-lingual QA evaluation, our new model is more than twice as effective as multilingual T5. All code, data and models are publicly available at https://github.com/castorini/AfriTeVa-keji.", }
In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages, designed by carefully auditing existing pretraining corpora to understand and rectify prevalent quality issues. To compile this dataset, we undertake a rigorous examination of current data sources for thirteen languages within one of the most extensive multilingual web crawls, mC4, and extract cleaner data through meticulous auditing and improved web crawling strategies. Subsequently, we pretrain a new T5-based model on this dataset and evaluate its performance on multiple downstream tasks. Our model demonstrates better downstream effectiveness over existing pretrained models across four NLP tasks, underscoring the critical role data quality plays in pretraining language models in low-resource scenarios. Specifically, on cross-lingual QA evaluation, our new model is more than twice as effective as multilingual T5. All code, data and models are publicly available at https://github.com/castorini/AfriTeVa-keji.
[ "Oladipo, Akintunde", "Adeyemi, Mofetoluwa", "Ahia, Orevaoghene", "Owodunni, Abraham", "Ogundepo, Odunayo", "Adelani, David", "Lin, Jimmy" ]
Better Quality Pre-training Data and T5 Models for African Languages
emnlp-main.11
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
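Heuristic filters in the spirit of the auditing described above, with invented thresholds: drop lines that are too short, mostly non-alphabetic, or that look like web boilerplate. The paper's actual audit is more involved; this only shows the shape of such a cleaning pass.

```python
import re

def keep_line(line: str, min_words: int = 5) -> bool:
    """Toy quality filters; every threshold here is an assumption."""
    words = line.split()
    if len(words) < min_words:
        return False                           # fragments, menus, captions
    alpha_ratio = sum(ch.isalpha() for ch in line) / max(len(line), 1)
    if alpha_ratio < 0.6:
        return False                           # mostly symbols or digits
    if re.search(r"(cookie|javascript|subscribe)", line, re.I):
        return False                           # common web boilerplate
    return True

def clean_document(text: str) -> str:
    return "\n".join(l for l in text.splitlines() if keep_line(l))
```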
https://aclanthology.org/2023.emnlp-main.12.bib
https://aclanthology.org/2023.emnlp-main.12/
@inproceedings{tan-etal-2023-sparse, title = "Sparse Universal Transformer", author = "Tan, Shawn and Shen, Yikang and Chen, Zhenfang and Courville, Aaron and Gan, Chuang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.12", doi = "10.18653/v1/2023.emnlp-main.12", pages = "169--179", abstract = "The Universal Transformer (UT) is a variant of the Transformer that shares parameters across its layers and is Turing-complete under certain assumptions. Empirical evidence also shows that UTs have better compositional generalization than Vanilla Transformers (VTs) in formal language tasks. The parameter-sharing also affords it better parameter efficiency than VTs. Despite its many advantages, most state-of-the-art NLP systems use VTs as their backbone model instead of UTs. This is mainly because scaling UT parameters is more compute and memory intensive than scaling up a VT. This paper proposes the Sparse Universal Transformer (SUT), which leverages Sparse Mixture of Experts (SMoE) to reduce UT{'}s computation complexity while retaining its parameter efficiency and generalization ability. Experiments show that SUT combines the best of both worlds, achieving strong generalization results on formal language tasks (Logical inference and CFQ) and impressive parameter and computation efficiency on standard natural language benchmarks like WMT{'}14.", }
The Universal Transformer (UT) is a variant of the Transformer that shares parameters across its layers and is Turing-complete under certain assumptions. Empirical evidence also shows that UTs have better compositional generalization than Vanilla Transformers (VTs) in formal language tasks. The parameter-sharing also affords it better parameter efficiency than VTs. Despite its many advantages, most state-of-the-art NLP systems use VTs as their backbone model instead of UTs. This is mainly because scaling UT parameters is more compute and memory intensive than scaling up a VT. This paper proposes the Sparse Universal Transformer (SUT), which leverages Sparse Mixture of Experts (SMoE) to reduce UT's computation complexity while retaining its parameter efficiency and generalization ability. Experiments show that SUT combines the best of both worlds, achieving strong generalization results on formal language tasks (Logical inference and CFQ) and impressive parameter and computation efficiency on standard natural language benchmarks like WMT'14.
[ "Tan, Shawn", "Shen, Yikang", "Chen, Zhenfang", "Courville, Aaron", "Gan, Chuang" ]
Sparse Universal Transformer
emnlp-main.12
2310.07096
[ "" ]
https://huggingface.co./papers/2310.07096
1
0
0
5
[]
[]
[]
1
Poster
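A compact PyTorch sketch of the two ingredients named in the abstract: one block shared across depth (the Universal Transformer part) and a feed-forward realized as a mixture of experts so each token activates only one expert (the sparse part). Dimensions are toy values, and the hard top-1 routing below is a simplification; practical SMoE layers use differentiable gating with load balancing.

```python
import torch
import torch.nn as nn

class MoEFFN(nn.Module):
    """Feed-forward as a top-1 mixture of experts (simplified routing)."""
    def __init__(self, d_model=64, d_ff=128, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                      # x: (batch, seq, d_model)
        gate = self.router(x).argmax(dim=-1)   # hard top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = gate == i
            if sel.any():
                out[sel] = expert(x[sel])      # only selected tokens pay for i
        return out

class SparseUT(nn.Module):
    """One shared block applied n_steps times (parameter sharing over depth)."""
    def __init__(self, d_model=64, n_heads=4, n_steps=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = MoEFFN(d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.n_steps = n_steps

    def forward(self, x):
        for _ in range(self.n_steps):          # same weights at every "layer"
            a, _ = self.attn(x, x, x)
            x = self.norm1(x + a)
            x = self.norm2(x + self.ffn(x))
        return x
```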
https://aclanthology.org/2023.emnlp-main.13.bib
https://aclanthology.org/2023.emnlp-main.13/
@inproceedings{li-etal-2023-theory, title = "Theory of Mind for Multi-Agent Collaboration via Large Language Models", author = "Li, Huao and Chong, Yu and Stepputtis, Simon and Campbell, Joseph and Hughes, Dana and Lewis, Charles and Sycara, Katia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.13", doi = "10.18653/v1/2023.emnlp-main.13", pages = "180--192", abstract = "While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaborations remains largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviors and high-order Theory of Mind capabilities among LLM-based agents. Our results reveal limitations in LLM-based agents{'} planning optimization due to systematic failures in managing long-horizon contexts and hallucination about the task state. We explore the use of explicit belief state representations to mitigate these issues, finding that it enhances task performance and the accuracy of ToM inferences for LLM-based agents.", }
While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviors and high-order Theory of Mind capabilities among LLM-based agents. Our results reveal limitations in LLM-based agents' planning optimization due to systematic failures in managing long-horizon contexts and hallucination about the task state. We explore the use of explicit belief state representations to mitigate these issues, finding that they enhance task performance and the accuracy of ToM inferences for LLM-based agents.
[ "Li, Huao", "Chong, Yu", "Stepputtis, Simon", "Campbell, Joseph", "Hughes, Dana", "Lewis, Charles", "Sycara, Katia" ]
Theory of Mind for Multi-Agent Collaboration via Large Language Models
emnlp-main.13
2310.10701
[ "https://github.com/romanlee6/multi_LLM_comm" ]
https://huggingface.co./papers/2310.10701
0
0
0
7
[]
[]
[ "agentharbor/agenta" ]
1
Poster
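An illustration of the explicit belief-state mitigation mentioned above: instead of letting a long dialogue history carry the task state, the agent keeps a compact structured belief and re-injects it each turn. Every field name here is invented for the sketch.

```python
# Hypothetical belief state for a cooperative text game agent.
belief_state = {
    "my_location": "room_3",
    "teammates": {"alpha": "room_1", "bravo": "unknown"},
    "found_victims": ["room_2"],
    "remaining_goals": ["search room_4", "report to base"],
}

def build_turn_prompt(observation: str, belief: dict) -> str:
    """Re-inject the compact belief each turn instead of the full history."""
    return (f"Current belief state: {belief}\n"
            f"New observation: {observation}\n"
            "Update the belief state, then choose the next action.")
```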
https://aclanthology.org/2023.emnlp-main.14.bib
https://aclanthology.org/2023.emnlp-main.14/
@inproceedings{litschko-etal-2023-establishing, title = "Establishing Trustworthiness: Rethinking Tasks and Model Evaluation", author = {Litschko, Robert and M{\"u}ller-Eberstein, Max and van der Goot, Rob and Weber-Genzel, Leon and Plank, Barbara}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.14", doi = "10.18653/v1/2023.emnlp-main.14", pages = "193--203", abstract = "Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades. Traditionally, facets of linguistic intelligence have been compartmentalized into tasks with specialized model architectures and corresponding evaluation protocols. With the advent of large language models (LLMs) the community has witnessed a dramatic shift towards general purpose, task-agnostic approaches powered by generative models. As a consequence, the traditional compartmentalized notion of language tasks is breaking down, followed by an increasing challenge for evaluation and analysis. At the same time, LLMs are being deployed in more real-world scenarios, including previously unforeseen zero-shot setups, increasing the need for trustworthy and reliable systems. Therefore, we argue that it is time to rethink what constitutes tasks and model evaluation in NLP, and pursue a more holistic view on language, placing trustworthiness at the center. Towards this goal, we review existing compartmentalized approaches for understanding the origins of a model{'}s functional capacity, and provide recommendations for more multi-faceted evaluation protocols.", }
Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades. Traditionally, facets of linguistic intelligence have been compartmentalized into tasks with specialized model architectures and corresponding evaluation protocols. With the advent of large language models (LLMs), the community has witnessed a dramatic shift towards general purpose, task-agnostic approaches powered by generative models. As a consequence, the traditional compartmentalized notion of language tasks is breaking down, followed by an increasing challenge for evaluation and analysis. At the same time, LLMs are being deployed in more real-world scenarios, including previously unforeseen zero-shot setups, increasing the need for trustworthy and reliable systems. Therefore, we argue that it is time to rethink what constitutes tasks and model evaluation in NLP, and pursue a more holistic view on language, placing trustworthiness at the center. Towards this goal, we review existing compartmentalized approaches for understanding the origins of a model's functional capacity, and provide recommendations for more multi-faceted evaluation protocols.
[ "Litschko, Robert", "M{\\\"u}ller-Eberstein, Max", "van der Goot, Rob", "Weber-Genzel, Leon", "Plank, Barbara" ]
Establishing Trustworthiness: Rethinking Tasks and Model Evaluation
emnlp-main.14
2310.05442
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.15.bib
https://aclanthology.org/2023.emnlp-main.15/
@inproceedings{himakunthala-etal-2023-lets, title = "Let{'}s Think Frame by Frame with {VIP}: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought", author = "Himakunthala, Vaishnavi and Ouyang, Andy and Rose, Daniel and He, Ryan and Mei, Alex and Lu, Yujie and Sonar, Chinmay and Saxon, Michael and Wang, William", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.15", doi = "10.18653/v1/2023.emnlp-main.15", pages = "204--219", abstract = "Despite exciting recent results showing vision-language systems{'} capacity to reason about images using natural language, their capacity for video reasoning remains underexplored. We motivate framing video reasoning as the sequential understanding of a small number of keyframes, thereby leveraging the power and robustness of vision-language while alleviating the computational complexities of processing videos. To evaluate this novel application, we introduce VIP, an inference-time challenge dataset designed to explore models{'} reasoning capabilities through video chain-of-thought. Inspired by visually descriptive scene plays, we propose two formats for keyframe description: unstructured dense captions and structured scene descriptions that identify the focus, action, mood, objects, and setting (FAMOuS) of the keyframe. To evaluate video reasoning, we propose two tasks: Video Infilling and Video Prediction, which test abilities to generate multiple intermediate keyframes and predict future keyframes, respectively. We benchmark GPT-4, GPT-3, and VICUNA on VIP, demonstrate the performance gap in these complex video reasoning tasks, and encourage future work to prioritize language models for efficient and generalized video reasoning.", }
Despite exciting recent results showing vision-language systems' capacity to reason about images using natural language, their capacity for video reasoning remains underexplored. We motivate framing video reasoning as the sequential understanding of a small number of keyframes, thereby leveraging the power and robustness of vision-language models while alleviating the computational complexities of processing videos. To evaluate this novel application, we introduce VIP, an inference-time challenge dataset designed to explore models' reasoning capabilities through video chain-of-thought. Inspired by visually descriptive scene plays, we propose two formats for keyframe description: unstructured dense captions and structured scene descriptions that identify the focus, action, mood, objects, and setting (FAMOuS) of the keyframe. To evaluate video reasoning, we propose two tasks: Video Infilling and Video Prediction, which test abilities to generate multiple intermediate keyframes and predict future keyframes, respectively. We benchmark GPT-4, GPT-3, and VICUNA on VIP, demonstrate the performance gap in these complex video reasoning tasks, and encourage future work to prioritize language models for efficient and generalized video reasoning.
[ "Himakunthala, Vaishnavi", "Ouyang, Andy", "Rose, Daniel", "He, Ryan", "Mei, Alex", "Lu, Yujie", "Sonar, Chinmay", "Saxon, Michael", "Wang, William" ]
Let's Think Frame by Frame with VIP: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought
emnlp-main.15
2305.13903
[ "https://github.com/vaishnavihimakunthala/vip" ]
https://huggingface.co./papers/2305.13903
2
0
0
9
[]
[ "ryanhe/VIP" ]
[]
1
Poster
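The FAMOuS scene-description format is spelled out in the abstract (focus, action, mood, objects, setting); a small record type makes it concrete. The example values are invented.

```python
from dataclasses import dataclass

@dataclass
class FamousDescription:
    """Structured keyframe description per the FAMOuS format."""
    focus: str      # main subject of the keyframe
    action: str     # what the subject is doing
    mood: str       # overall tone of the scene
    objects: str    # salient objects in frame
    setting: str    # where the scene takes place

frame = FamousDescription(
    focus="a chef", action="plating a dish", mood="calm",
    objects="counter, pan, garnish", setting="restaurant kitchen")
```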
https://aclanthology.org/2023.emnlp-main.16.bib
https://aclanthology.org/2023.emnlp-main.16/
@inproceedings{khondaker-etal-2023-gptaraeval, title = "{GPTA}ra{E}val: A Comprehensive Evaluation of {C}hat{GPT} on {A}rabic {NLP}", author = "Khondaker, Md Tawkat Islam and Waheed, Abdul and Nagoudi, El Moatez Billah and Abdul-Mageed, Muhammad", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.16", doi = "10.18653/v1/2023.emnlp-main.16", pages = "220--247", abstract = "ChatGPT{'}s emergence heralds a transformative phase in NLP, particularly demonstrated through its excellent performance on many English benchmarks. However, the model{'}s efficacy across diverse linguistic contexts remains largely uncharted territory. This work aims to bridge this knowledge gap, with a primary focus on assessing ChatGPT{'}s capabilities on Arabic languages and dialectal varieties. Our comprehensive study conducts a large-scale automated and human evaluation of ChatGPT, encompassing 44 distinct language understanding and generation tasks on over 60 different datasets. To our knowledge, this marks the first extensive performance analysis of ChatGPT{'}s deployment in Arabic NLP. Our findings indicate that, despite its remarkable performance in English, ChatGPT is consistently surpassed by smaller models that have undergone finetuning on Arabic. We further undertake a meticulous comparison of ChatGPT and GPT-4{'}s Modern Standard Arabic (MSA) and Dialectal Arabic (DA), unveiling the relative shortcomings of both models in handling Arabic dialects compared to MSA. Although we further explore and confirm the utility of employing GPT-4 as a potential alternative for human evaluation, our work adds to a growing body of research underscoring the limitations of ChatGPT.", }
ChatGPT's emergence heralds a transformative phase in NLP, particularly demonstrated through its excellent performance on many English benchmarks. However, the model's efficacy across diverse linguistic contexts remains largely uncharted territory. This work aims to bridge this knowledge gap, with a primary focus on assessing ChatGPT's capabilities on Arabic languages and dialectal varieties. Our comprehensive study conducts a large-scale automated and human evaluation of ChatGPT, encompassing 44 distinct language understanding and generation tasks on over 60 different datasets. To our knowledge, this marks the first extensive performance analysis of ChatGPT's deployment in Arabic NLP. Our findings indicate that, despite its remarkable performance in English, ChatGPT is consistently surpassed by smaller models that have undergone finetuning on Arabic. We further undertake a meticulous comparison of ChatGPT's and GPT-4's performance on Modern Standard Arabic (MSA) and Dialectal Arabic (DA), unveiling the relative shortcomings of both models in handling Arabic dialects compared to MSA. Although we further explore and confirm the utility of employing GPT-4 as a potential alternative for human evaluation, our work adds to a growing body of research underscoring the limitations of ChatGPT.
[ "Khondaker, Md Tawkat Islam", "Waheed, Abdul", "Nagoudi, El Moatez Billah", "Abdul-Mageed, Muhammad" ]
GPTAraEval: A Comprehensive Evaluation of ChatGPT on Arabic NLP
emnlp-main.16
2305.14976
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.17.bib
https://aclanthology.org/2023.emnlp-main.17/
@inproceedings{li-etal-2023-dual-channel, title = "Dual-Channel Span for Aspect Sentiment Triplet Extraction", author = "Li, Pan and Li, Ping and Zhang, Kai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.17", doi = "10.18653/v1/2023.emnlp-main.17", pages = "248--261", abstract = "Aspect Sentiment Triplet Extraction (ASTE) is one of the compound tasks of fine-grained aspect-based sentiment analysis (ABSA), aiming at extracting the triplets of aspect terms, corresponding opinion terms and the associated sentiment orientation. Recent efforts in exploiting span-level semantic interaction shown superior performance on ASTE task. However, most of the existing span-based approaches suffer from enumerating all possible spans, since it can introduce too much noise in sentiment triplet extraction. To ease this burden, we propose a dual-channel span generation method to coherently constrain the search space of span candidates. Specifically, we leverage the syntactic relations among aspect/opinion terms and the associated part-of-speech characteristics in those terms to generate span candidates, which reduces span enumeration by nearly half. Besides, feature representations are learned from syntactic and part-of-speech correlation among terms, which renders span representation fruitful linguistic information. Extensive experiments on two versions of public datasets demonstrate both the effectiveness of our design and the superiority on ASTE/ATE/OTE tasks.", }
Aspect Sentiment Triplet Extraction (ASTE) is one of the compound tasks of fine-grained aspect-based sentiment analysis (ABSA), aiming at extracting triplets of aspect terms, corresponding opinion terms, and the associated sentiment orientation. Recent efforts exploiting span-level semantic interaction have shown superior performance on the ASTE task. However, most existing span-based approaches suffer from enumerating all possible spans, which can introduce too much noise into sentiment triplet extraction. To ease this burden, we propose a dual-channel span generation method to coherently constrain the search space of span candidates. Specifically, we leverage the syntactic relations among aspect/opinion terms and the associated part-of-speech characteristics in those terms to generate span candidates, which reduces span enumeration by nearly half. Besides, feature representations are learned from the syntactic and part-of-speech correlations among terms, which endows span representations with fruitful linguistic information. Extensive experiments on two versions of public datasets demonstrate both the effectiveness of our design and its superiority on the ASTE/ATE/OTE tasks.
[ "Li, Pan", "Li, Ping", "Zhang, Kai" ]
Dual-Channel Span for Aspect Sentiment Triplet Extraction
emnlp-main.17
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
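A hedged sketch of syntax-constrained span generation: rather than enumerating every span, anchor candidates on words whose part-of-speech plausibly heads an aspect (nouns) or an opinion (adjectives, adverbs, verbs). The tag sets and windowing are simplifications of the paper's dual-channel design.

```python
ASPECT_POS = {"NOUN", "PROPN"}           # aspect channel anchors
OPINION_POS = {"ADJ", "ADV", "VERB"}     # opinion channel anchors

def candidate_spans(tokens, pos_tags, keep_pos, max_len=3):
    """Keep only spans anchored on a plausible head word, instead of
    enumerating every possible span."""
    spans = []
    for i, tag in enumerate(pos_tags):
        if tag in keep_pos:
            for length in range(1, max_len + 1):
                if i + length <= len(tokens):
                    spans.append((i, i + length, " ".join(tokens[i:i + length])))
    return spans   # anchored spans only; full enumeration anchors every token
```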
https://aclanthology.org/2023.emnlp-main.18.bib
https://aclanthology.org/2023.emnlp-main.18/
@inproceedings{li-zhang-2023-cultural, title = "Cultural Concept Adaptation on Multimodal Reasoning", author = "Li, Zhi and Zhang, Yin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.18", doi = "10.18653/v1/2023.emnlp-main.18", pages = "262--276", abstract = "Developing cultural adaptation methods is important, which can improve the model performance on the low-resource ones and provide more equitable opportunities for everyone to benefit from advanced technology. Past methods primarily focused on multilingual and multimodal capabilities, and the improvement of multicultural competence is still an unexplored problem. This is largely due to the difficulty of data scarcity and expensive annotation. In this paper, we navigate this uncharted territory by leveraging high-resource cultures to facilitate comprehension of low-resource ones. We first introduce an annotation-free method for cultural-concept adaptation and construct a concept mapping set. To facilitate the model{'}s comprehension of cultural-concept mappings, we propose a new multimodal data augmentation called CultureMixup. This approach employs a three-tier code-switching strategy on textual sentences. Additionally, it uses a cultural concept-based mixup method for the images. This combination effectively generates new data instances across culture, phrase, word, and image levels. For visually grounded reasoning across languages and cultures, experimental results on five languages show that our method consistently improves performance for four existing multilingual and multimodal models on both zero-shot and few-shot settings.", }
Developing cultural adaptation methods is important: such methods can improve model performance on low-resource cultures and provide more equitable opportunities for everyone to benefit from advanced technology. Past methods have primarily focused on multilingual and multimodal capabilities, and the improvement of multicultural competence remains an unexplored problem, largely due to data scarcity and expensive annotation. In this paper, we navigate this uncharted territory by leveraging high-resource cultures to facilitate comprehension of low-resource ones. We first introduce an annotation-free method for cultural-concept adaptation and construct a concept mapping set. To facilitate the model's comprehension of cultural-concept mappings, we propose a new multimodal data augmentation called CultureMixup. This approach employs a three-tier code-switching strategy on textual sentences. Additionally, it uses a cultural concept-based mixup method for the images. This combination effectively generates new data instances across the culture, phrase, word, and image levels. For visually grounded reasoning across languages and cultures, experimental results on five languages show that our method consistently improves the performance of four existing multilingual and multimodal models in both zero-shot and few-shot settings.
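A minimal sketch of the word-level tier of a CultureMixup-style code switch follows, assuming a toy concept-mapping set; the real mapping set is constructed automatically by the paper, and the concept pairs, function name, and switching probability here are purely illustrative:

```python
import random

# Toy cultural-concept mapping (high-resource concept -> adapted concept);
# these pairs are invented for illustration only.
CONCEPT_MAP = {"kimono": "sari", "sushi": "dosa"}

def code_switch(sentence, concept_map, p=0.5, rng=None):
    """Word-level tier of a CultureMixup-style augmentation: swap a
    source cultural concept for its counterpart with probability p.
    Phrase- and image-level tiers (and punctuation handling) are
    omitted for brevity."""
    rng = rng or random.Random(0)
    out = []
    for tok in sentence.split():
        key = tok.lower().strip(".,")
        if key in concept_map and rng.random() < p:
            out.append(concept_map[key])
        else:
            out.append(tok)
    return " ".join(out)

print(code_switch("A woman wearing a kimono is eating sushi.", CONCEPT_MAP))
```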
[ "Li, Zhi", "Zhang, Yin" ]
Cultural Concept Adaptation on Multimodal Reasoning
emnlp-main.18
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.19.bib
https://aclanthology.org/2023.emnlp-main.19/
@inproceedings{samir-silfverberg-2023-understanding, title = "Understanding Compositional Data Augmentation in Typologically Diverse Morphological Inflection", author = "Samir, Farhan and Silfverberg, Miikka", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.19", doi = "10.18653/v1/2023.emnlp-main.19", pages = "277--291", abstract = "Data augmentation techniques are widely used in low-resource automatic morphological inflection to address the issue of data sparsity. However, the full implications of these techniques remain poorly understood. In this study, we aim to shed light on the theoretical aspects of the data augmentation strategy StemCorrupt, a method that generates synthetic examples by randomly substituting stem characters in existing gold standard training examples. Our analysis uncovers that StemCorrupt brings about fundamental changes in the underlying data distribution, revealing inherent compositional concatenative structure. To complement our theoretical analysis, we investigate the data-efficiency of StemCorrupt. Through evaluation across a diverse set of seven typologically distinct languages, we demonstrate that selecting a subset of datapoints with both high diversity \textit{and} high predictive uncertainty significantly enhances the data-efficiency of compared to competitive baselines. Furthermore, we explore the impact of typological features on the choice of augmentation strategy and find that languages incorporating non-concatenativity, such as morphonological alternations, derive less benefit from synthetic examples with high predictive uncertainty. We attribute this effect to phonotactic violations induced by StemCorrupt, emphasizing the need for further research to ensure optimal performance across the entire spectrum of natural language morphology.", }
Data augmentation techniques are widely used in low-resource automatic morphological inflection to address the issue of data sparsity. However, the full implications of these techniques remain poorly understood. In this study, we aim to shed light on the theoretical aspects of the data augmentation strategy StemCorrupt, a method that generates synthetic examples by randomly substituting stem characters in existing gold-standard training examples. Our analysis uncovers that StemCorrupt brings about fundamental changes in the underlying data distribution, revealing inherent compositional concatenative structure. To complement our theoretical analysis, we investigate the data-efficiency of StemCorrupt. Through evaluation across a diverse set of seven typologically distinct languages, we demonstrate that selecting a subset of datapoints with both high diversity and high predictive uncertainty significantly enhances the data-efficiency of StemCorrupt compared to competitive baselines. Furthermore, we explore the impact of typological features on the choice of augmentation strategy and find that languages incorporating non-concatenativity, such as morphonological alternations, derive less benefit from synthetic examples with high predictive uncertainty. We attribute this effect to phonotactic violations induced by StemCorrupt, emphasizing the need for further research to ensure optimal performance across the entire spectrum of natural language morphology.
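StemCorrupt's generation step is concrete enough to sketch. The snippet below substitutes random characters inside the shared stem of a (lemma, inflected form) pair; detecting the stem as the longest common prefix is a simplifying assumption, not the paper's alignment procedure:

```python
import random

def stem_corrupt(lemma, inflection, rng, n_subs=1):
    """Sketch of StemCorrupt: substitute random characters inside the
    shared stem of a (lemma, inflected form) pair, yielding a synthetic
    training example. Stem detection here is just the longest common
    prefix, a simplification of the paper's setup."""
    stem_len = 0
    for a, b in zip(lemma, inflection):
        if a != b:
            break
        stem_len += 1
    chars_l, chars_i = list(lemma), list(inflection)
    for _ in range(n_subs):
        if stem_len == 0:
            break
        pos = rng.randrange(stem_len)
        new_char = rng.choice("abcdefghijklmnopqrstuvwxyz")
        chars_l[pos] = chars_i[pos] = new_char
    return "".join(chars_l), "".join(chars_i)

rng = random.Random(13)
print(stem_corrupt("walk", "walked", rng))  # e.g. a pair like ('wblk', 'wblked')
```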
[ "Samir, Farhan", "Silfverberg, Miikka" ]
Understanding Compositional Data Augmentation in Typologically Diverse Morphological Inflection
emnlp-main.19
2305.13658
[ "https://github.com/smfsamir/understanding-augmentation-morphology" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.20.bib
https://aclanthology.org/2023.emnlp-main.20/
@inproceedings{li-etal-2023-evaluating, title = "Evaluating Object Hallucination in Large Vision-Language Models", author = "Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Xin and Wen, Ji-Rong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.20", doi = "10.18653/v1/2023.emnlp-main.20", pages = "292--305", abstract = "Inspired by the superior language abilities of large language models (LLM), large vision-language models (LVLM) have been recently proposed by integrating powerful LLMs for improving the performance on complex multimodal tasks. Despite the promising progress on LVLMs, we find that they suffer from object hallucinations, i.e., they tend to generate objects inconsistent with the target images in the descriptions. To investigate it, this work presents the first systematic study on object hallucination of LVLMs. We conduct the evaluation experiments on several representative LVLMs, and show that they mostly suffer from severe object hallucination issues. We further discuss that the visual instructions may influence the hallucination, and find that: objects that frequently appear in the visual instructions or co-occur with the image objects are obviously prone to be hallucinated by LVLMs. Besides, we further design a polling-based query method called POPE for better evaluation of object hallucination. Experiment results show that our POPE can evaluate object hallucination in a more stable and flexible way.", }
Inspired by the superior language abilities of large language models (LLMs), large vision-language models (LVLMs) have recently been proposed, integrating powerful LLMs to improve performance on complex multimodal tasks. Despite the promising progress of LVLMs, we find that they suffer from object hallucinations, i.e., they tend to generate objects in their descriptions that are inconsistent with the target images. To investigate this, we present the first systematic study of object hallucination in LVLMs. We conduct evaluation experiments on several representative LVLMs and show that they mostly suffer from severe object hallucination issues. We further discuss how the visual instructions may influence hallucination and find that objects that frequently appear in the visual instructions or co-occur with the image objects are clearly prone to be hallucinated by LVLMs. Besides, we design a polling-based query method called POPE for better evaluation of object hallucination. Experimental results show that POPE can evaluate object hallucination in a more stable and flexible way.
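POPE's polling idea, probing an LVLM with balanced yes/no existence questions, can be sketched as follows. Only the "random" negative-sampling variant is shown (the paper also uses popular and adversarial negatives), and the question template and helper names are assumptions based on the abstract:

```python
import random

def build_pope_probes(gt_objects, object_pool, k=3, rng=None):
    """Build POPE-style polling probes for one image: k positive yes/no
    questions about objects present, and k negatives sampled from
    objects absent (the 'random' sampling variant)."""
    rng = rng or random.Random(0)
    present = rng.sample(sorted(gt_objects), min(k, len(gt_objects)))
    absent_pool = sorted(set(object_pool) - set(gt_objects))
    absent = rng.sample(absent_pool, min(k, len(absent_pool)))
    probes = [(f"Is there a {o} in the image?", "yes") for o in present]
    probes += [(f"Is there a {o} in the image?", "no") for o in absent]
    return probes

pool = ["dog", "cat", "car", "tree", "bicycle", "person"]
for question, gold in build_pope_probes({"dog", "person"}, pool, k=2):
    print(question, "->", gold)
```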
[ "Li, Yifan", "Du, Yifan", "Zhou, Kun", "Wang, Jinpeng", "Zhao, Xin", "Wen, Ji-Rong" ]
Evaluating Object Hallucination in Large Vision-Language Models
emnlp-main.20
2305.10355
[ "https://github.com/rucaibox/pope" ]
https://huggingface.co./papers/2305.10355
0
0
0
6
[ "google/paligemma-3b-pt-224", "google/paligemma-3b-pt-896", "google/paligemma-3b-mix-448", "google/paligemma-3b-mix-224", "google/paligemma-3b-pt-448", "google/paligemma-3b-ft-ocrvqa-896", "google/paligemma-3b-ft-vqav2-448", "google/paligemma-3b-ft-refcoco-seg-896", "google/paligemma-3b-ft-ocrvqa-448", "google/paligemma-3b-ft-cococap-448", "google/paligemma-3b-ft-ai2d-224-jax", "google/paligemma-3b-ft-docvqa-896", "google/paligemma-3b-ft-vizwizvqa-448-jax", "google/paligemma-3b-ft-vqav2-224", "google/paligemma-3b-ft-widgetcap-448", "google/paligemma-3b-ft-rsvqa-hr-224", "google/paligemma-3b-pt-896-jax", "google/paligemma-3b-ft-vqav2-224-jax", "google/paligemma-3b-ft-docvqa-896-jax", "google/paligemma-3b-ft-widgetcap-448-jax", "google/paligemma-3b-ft-vizwizvqa-448", "google/paligemma-3b-ft-nlvr2-224", "google/paligemma-3b-ft-refcoco-seg-448-jax", "google/paligemma-3b-ft-vqav2-448-jax", "google/paligemma-3b-ft-okvqa-224", "google/paligemma-3b-ft-ocrvqa-224", "google/paligemma-3b-ft-cococap-224", "google/paligemma-3b-ft-ocrvqa-896-jax", "google/paligemma-3b-ft-science-qa-224", "google/paligemma-3b-ft-textvqa-896-jax", "google/paligemma-3b-ft-coco35l-224", "google/paligemma-3b-ft-docvqa-448", "google/paligemma-3b-ft-science-qa-448", "google/paligemma-3b-ft-widgetcap-224", "google/paligemma-3b-ft-textcaps-448", "google/paligemma-3b-ft-textvqa-896", "leo009/paligemma-3b-mix-224", "google/paligemma-3b-ft-infovqa-896-jax", "google/paligemma-3b-ft-screen2words-224", "google/paligemma-3b-ft-screen2words-448-jax", "google/paligemma-3b-ft-textvqa-448", "google/paligemma-3b-ft-rsvqa-lr-224-jax", "google/paligemma-3b-ft-tallyqa-224-jax", "google/paligemma-3b-ft-refcoco-seg-896-jax", "google/paligemma-3b-ft-scicap-224", "google/paligemma-3b-ft-okvqa-224-jax", "google/paligemma-3b-ft-nlvr2-448-jax", "google/paligemma-3b-ft-science-qa-224-jax", "google/paligemma-3b-ft-infovqa-896", "google/paligemma-3b-ft-docvqa-448-jax", "google/paligemma-3b-ft-gqa-448", "google/paligemma-3b-ft-okvqa-448-jax", "google/paligemma-3b-ft-textcaps-224", "google/paligemma-3b-ft-rsvqa-hr-448-jax", "google/paligemma-3b-ft-tallyqa-448-jax", "google/paligemma-3b-ft-tallyqa-224", "google/paligemma-3b-ft-stvqa-448-jax", "google/paligemma-3b-ft-stvqa-224-jax", "google/paligemma-3b-ft-ai2d-224", "google/paligemma-3b-ft-widgetcap-224-jax", "google/paligemma-3b-ft-aokvqa-da-224-jax", "google/paligemma-3b-ft-refcoco-seg-224-jax", "google/paligemma-3b-ft-nlvr2-448", "google/paligemma-3b-ft-infovqa-448", "google/paligemma-3b-ft-coco35l-224-jax", "google/paligemma-3b-ft-scicap-224-jax", "google/paligemma-3b-ft-aokvqa-da-448", "google/paligemma-3b-ft-tallyqa-448", "google/paligemma-3b-ft-cococap-448-jax", "google/paligemma-3b-ft-stvqa-896", "google/paligemma-3b-ft-vizwizvqa-224", "google/paligemma-3b-ft-aokvqa-mc-224-jax", "google/paligemma-3b-ft-gqa-448-jax", "google/paligemma-3b-ft-docvqa-224", "google/paligemma-3b-ft-cococap-224-jax", "google/paligemma-3b-ft-gqa-224", "google/paligemma-3b-ft-textvqa-448-jax", "google/paligemma-3b-ft-aokvqa-mc-448", "google/paligemma-3b-ft-ai2d-448-jax", "google/paligemma-3b-ft-coco35l-448-jax", "google/paligemma-3b-ft-rsvqa-hr-448", "google/paligemma-3b-ft-refcoco-seg-224", "google/paligemma-3b-ft-scicap-448", "google/paligemma-3b-ft-aokvqa-da-224", "google/paligemma-3b-ft-science-qa-448-jax", "google/paligemma-3b-ft-gqa-224-jax", "google/paligemma-3b-ft-infovqa-224", "google/paligemma-3b-ft-scicap-448-jax", "google/paligemma-3b-ft-aokvqa-mc-224", "google/paligemma-3b-ft-stvqa-224", 
"google/paligemma-3b-ft-stvqa-448", "google/paligemma-3b-ft-infovqa-448-jax", "google/paligemma-3b-ft-textvqa-224-jax", "google/paligemma-3b-ft-coco35l-448", "google/paligemma-3b-ft-refcoco-seg-448", "google/paligemma-3b-ft-aokvqa-mc-448-jax", "hermanhelf/paligemma", "google/paligemma-3b-ft-screen2words-448", "google/paligemma-3b-ft-okvqa-448", "google/paligemma-3b-ft-rsvqa-lr-224" ]
[ "HuggingFaceM4/POPE_modif" ]
[ "big-vision/paligemma-hf", "manu/ColPali-demo", "merve/paligemma-doc", "merve/paligemma-tracking", "agentsea/paligemma-waveui", "Justinrune/LLaMA-Factory", "Saee/vQA-exploration", "dwb2023/model_explorer2", "dwb2023/model_explorer4", "rynmurdock/Blue_Tigers", "beingcognitive/Image_to_Music", "dwb2023/hf_extractor", "Scharbhen/paligemma-vqa", "NSTiwari/PaliGemma-ZeroShotDetection-Video", "kenken999/fastapi_django_main_live", "LEAHWA/Artificial_Intel_project", "dwb2023/omniscience", "triphuong57/paligemma_finetune", "triphuong57/paligemma_finetune_v2", "triphuong57/paligemma_ft_v1", "taufiqdp/paligemma", "hermanhelf/paligemma-hf", "gabrielaltay/vlmqa", "HUANG-Stephanie/cvquest-colpali", "anthony-chen/Chem-210-Autograder", "mattraj/curacel-demo-1", "mattraj/curacel-demo-2" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.21.bib
https://aclanthology.org/2023.emnlp-main.21/
@inproceedings{cao-etal-2023-event, title = "Event Ontology Completion with Hierarchical Structure Evolution Networks", author = "Cao, Pengfei and Hao, Yupu and Chen, Yubo and Liu, Kang and Xu, Jiexin and Li, Huaijun and Jiang, Xiaojian and Zhao, Jun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.21", doi = "10.18653/v1/2023.emnlp-main.21", pages = "306--320", abstract = "Traditional event detection methods require predefined event schemas. However, manually defining event schemas is expensive and the coverage of schemas is limited. To this end, some works study the event type induction (ETI) task, which discovers new event types via clustering. However, the setting of ETI suffers from two limitations: event types are not linked into the existing hierarchy and have no semantic names. In this paper, we propose a new research task named Event Ontology Completion (EOC), which aims to simultaneously achieve event clustering, hierarchy expansion and type naming. Furthermore, we develop a Hierarchical Structure Evolution Network (HalTon) for this new task. Specifically, we first devise a Neighborhood Contrastive Clustering module to cluster unlabeled event instances. Then, we propose a Hierarchy-Aware Linking module to incorporate the hierarchical information for event expansion. Finally, we generate meaningful names for new types via an In-Context Learning-based Naming module. Extensive experiments indicate that our method achieves the best performance, outperforming the baselines by 8.23{\%}, 8.79{\%} and 8.10{\%} of ARI score on three datasets.", }
Traditional event detection methods require predefined event schemas. However, manually defining event schemas is expensive and the coverage of schemas is limited. To this end, some works study the event type induction (ETI) task, which discovers new event types via clustering. However, the setting of ETI suffers from two limitations: event types are not linked into the existing hierarchy and have no semantic names. In this paper, we propose a new research task named Event Ontology Completion (EOC), which aims to simultaneously achieve event clustering, hierarchy expansion, and type naming. Furthermore, we develop a Hierarchical Structure Evolution Network (HalTon) for this new task. Specifically, we first devise a Neighborhood Contrastive Clustering module to cluster unlabeled event instances. Then, we propose a Hierarchy-Aware Linking module to incorporate hierarchical information for event expansion. Finally, we generate meaningful names for new types via an In-Context Learning-based Naming module. Extensive experiments indicate that our method achieves the best performance, outperforming the baselines by 8.23%, 8.79%, and 8.10% in ARI score on three datasets.
[ "Cao, Pengfei", "Hao, Yupu", "Chen, Yubo", "Liu, Kang", "Xu, Jiexin", "Li, Huaijun", "Jiang, Xiaojian", "Zhao, Jun" ]
Event Ontology Completion with Hierarchical Structure Evolution Networks
emnlp-main.21
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.22.bib
https://aclanthology.org/2023.emnlp-main.22/
@inproceedings{jin-etal-2023-parameter, title = "Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients", author = "Jin, Feihu and Zhang, Jiajun and Zong, Chengqing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.22", doi = "10.18653/v1/2023.emnlp-main.22", pages = "321--330", abstract = "Fine-tuning all parameters of large language models (LLMs) requires significant computational resources and is time-consuming. Recent parameter-efficient tuning methods such as Adapter tuning, Prefix tuning, and LoRA allow for updating a small subset of parameters in large language models. However, they can only save approximately 30{\%} of the training memory requirements, due to the problem that gradient computation and backpropagation are still necessary for these methods. This paper proposes a novel parameter-efficient tuning method for LLMs without calculating their gradients. Leveraging the discernible similarities between the parameter-efficient modules of the same task learned by both large and small language models, we put forward a strategy for transferring the parameter-efficient modules, originally derived from small language models to much larger ones. To ensure a smooth and effective adaptation process, we further introduce a Bridge model to guarantee dimensional consistency while also stimulating a dynamic interaction between the models. We demonstrate the effectiveness of our method using the T5 and GPT-2 series of language models on the SuperGLUE benchmark. Our method achieves comparable performance to both fine-tuning and parameter-efficient tuning on large language models without needing gradient-based optimization. Additionally, our method achieves up to 5.7x memory reduction compared to parameter-efficient tuning.", }
Fine-tuning all parameters of large language models (LLMs) requires significant computational resources and is time-consuming. Recent parameter-efficient tuning methods such as Adapter tuning, Prefix tuning, and LoRA allow for updating a small subset of parameters in large language models. However, they can only save approximately 30% of the training memory requirements, because gradient computation and backpropagation are still necessary for these methods. This paper proposes a novel parameter-efficient tuning method for LLMs without calculating their gradients. Leveraging the discernible similarities between the parameter-efficient modules of the same task learned by both large and small language models, we put forward a strategy for transferring the parameter-efficient modules originally derived from small language models to much larger ones. To ensure a smooth and effective adaptation process, we further introduce a Bridge model to guarantee dimensional consistency while also stimulating a dynamic interaction between the models. We demonstrate the effectiveness of our method using the T5 and GPT-2 series of language models on the SuperGLUE benchmark. Our method achieves comparable performance to both fine-tuning and parameter-efficient tuning on large language models without needing gradient-based optimization. Additionally, our method achieves up to 5.7x memory reduction compared to parameter-efficient tuning.
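A hedged sketch of the dimensional-consistency role the Bridge model plays when transferring LoRA-style modules from a small LM to a larger one. The projection design below is an assumption made for illustration, not the paper's exact architecture, and it omits the Bridge's interaction role:

```python
import torch
import torch.nn as nn

class Bridge(nn.Module):
    """Illustrative Bridge: maps LoRA factors learned at a small model's
    hidden size up to a large model's hidden size, so the transferred
    module is dimensionally consistent with the larger backbone."""
    def __init__(self, small_dim, large_dim):
        super().__init__()
        self.proj_in = nn.Linear(small_dim, large_dim, bias=False)
        self.proj_out = nn.Linear(small_dim, large_dim, bias=False)

    def forward(self, lora_A, lora_B):
        # lora_A: (rank, small_dim), lora_B: (small_dim, rank) from the
        # small model; returns factors usable at the large model's width.
        large_A = lora_A @ self.proj_in.weight.T   # (rank, large_dim)
        large_B = self.proj_out.weight @ lora_B    # (large_dim, rank)
        return large_A, large_B

bridge = Bridge(small_dim=512, large_dim=1024)
A, B = torch.randn(8, 512), torch.randn(512, 8)
A_big, B_big = bridge(A, B)
print(A_big.shape, B_big.shape)  # torch.Size([8, 1024]) torch.Size([1024, 8])
```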
[ "Jin, Feihu", "Zhang, Jiajun", "Zong, Chengqing" ]
Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients
emnlp-main.22
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.23.bib
https://aclanthology.org/2023.emnlp-main.23/
@inproceedings{lei-huang-2023-discourse, title = "Discourse Structures Guided Fine-grained Propaganda Identification", author = "Lei, Yuanyuan and Huang, Ruihong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.23", doi = "10.18653/v1/2023.emnlp-main.23", pages = "331--342", abstract = "Propaganda is a form of deceptive narratives that instigate or mislead the public, usually with a political purpose. In this paper, we aim to identify propaganda in political news at two fine-grained levels: sentence-level and token-level. We observe that propaganda content is more likely to be embedded in sentences that attribute causality or assert contrast to nearby sentences, as well as seen in opinionated evaluation, speculation and discussions of future expectation. Hence, we propose to incorporate both local and global discourse structures for propaganda discovery and construct two teacher models for identifying PDTB-style discourse relations between nearby sentences and common discourse roles of sentences in a news article respectively. We further devise two methods to incorporate the two types of discourse structures for propaganda identification by either using teacher predicted probabilities as additional features or soliciting guidance in a knowledge distillation framework. Experiments on the benchmark dataset demonstrate that leveraging guidance from discourse structures can significantly improve both precision and recall of propaganda content identification.", }
Propaganda is a form of deceptive narratives that instigate or mislead the public, usually with a political purpose. In this paper, we aim to identify propaganda in political news at two fine-grained levels: sentence-level and token-level. We observe that propaganda content is more likely to be embedded in sentences that attribute causality or assert contrast to nearby sentences, as well as seen in opinionated evaluation, speculation and discussions of future expectation. Hence, we propose to incorporate both local and global discourse structures for propaganda discovery and construct two teacher models for identifying PDTB-style discourse relations between nearby sentences and common discourse roles of sentences in a news article respectively. We further devise two methods to incorporate the two types of discourse structures for propaganda identification by either using teacher predicted probabilities as additional features or soliciting guidance in a knowledge distillation framework. Experiments on the benchmark dataset demonstrate that leveraging guidance from discourse structures can significantly improve both precision and recall of propaganda content identification.
[ "Lei, Yuanyuan", "Huang, Ruihong" ]
Discourse Structures Guided Fine-grained Propaganda Identification
emnlp-main.23
2310.18544
[ "https://github.com/yuanyuanlei-nlp/propaganda_emnlp_2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.24.bib
https://aclanthology.org/2023.emnlp-main.24/
@inproceedings{minixhofer-etal-2023-compoundpiece, title = "{C}ompound{P}iece: Evaluating and Improving Decompounding Performance of Language Models", author = "Minixhofer, Benjamin and Pfeiffer, Jonas and Vuli{\'c}, Ivan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.24", doi = "10.18653/v1/2023.emnlp-main.24", pages = "343--359", abstract = "While many languages possess processes of joining two or more words to create compound words, previous studies have been typically limited only to languages with excessively productive compound formation (e.g., German, Dutch) and there is no public dataset containing compound and non-compound words across a large number of languages. In this work, we systematically study decompounding, the task of splitting compound words into their constituents, at a wide scale. We first address the data gap by introducing a dataset of 255k compound and non-compound words across 56 diverse languages obtained from Wiktionary. We then use this dataset to evaluate an array of Large Language Models (LLMs) on the decompounding task. We find that LLMs perform poorly, especially on words which are tokenized unfavorably by subword tokenization. We thus introduce a novel methodology to train dedicated models for decompounding. The proposed two-stage procedure relies on a fully self-supervised objective in the first stage, while the second, supervised learning stage optionally fine-tunes the model on the annotated Wiktionary data. Our self-supervised models outperform the prior best unsupervised decompounding models by 13.9{\%} accuracy on average. Our fine-tuned models outperform all prior (language-specific) decompounding tools. Furthermore, we use our models to leverage decompounding during the creation of a subword tokenizer, which we refer to as CompoundPiece. CompoundPiece tokenizes compound words more favorably on average, leading to improved performance on decompounding over an otherwise equivalent model using SentencePiece tokenization.", }
While many languages possess processes of joining two or more words to create compound words, previous studies have been typically limited only to languages with excessively productive compound formation (e.g., German, Dutch) and there is no public dataset containing compound and non-compound words across a large number of languages. In this work, we systematically study decompounding, the task of splitting compound words into their constituents, at a wide scale. We first address the data gap by introducing a dataset of 255k compound and non-compound words across 56 diverse languages obtained from Wiktionary. We then use this dataset to evaluate an array of Large Language Models (LLMs) on the decompounding task. We find that LLMs perform poorly, especially on words which are tokenized unfavorably by subword tokenization. We thus introduce a novel methodology to train dedicated models for decompounding. The proposed two-stage procedure relies on a fully self-supervised objective in the first stage, while the second, supervised learning stage optionally fine-tunes the model on the annotated Wiktionary data. Our self-supervised models outperform the prior best unsupervised decompounding models by 13.9% accuracy on average. Our fine-tuned models outperform all prior (language-specific) decompounding tools. Furthermore, we use our models to leverage decompounding during the creation of a subword tokenizer, which we refer to as CompoundPiece. CompoundPiece tokenizes compound words more favorably on average, leading to improved performance on decompounding over an otherwise equivalent model using SentencePiece tokenization.
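Since the record lists released checkpoints (see the Models field below), a hedged usage sketch with the Hugging Face `transformers` API may help. The checkpoint name comes from this record's artifacts, but the exact input/output format is an assumption; consult the model card before relying on it:

```python
# pip install transformers torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint released with the paper; prompt/output conventions below
# are assumptions, not documented behavior.
model_name = "benjamin/compoundpiece"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

word = "Bundesfinanzministerium"  # a German compound
inputs = tokenizer(word, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
# Expected output: the word split into constituents, in whatever
# separator format the model card specifies.
print(tokenizer.decode(out[0], skip_special_tokens=True))
```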
[ "Minixhofer, Benjamin", "Pfeiffer, Jonas", "Vuli{\\'c}, Ivan" ]
CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models
emnlp-main.24
2305.14214
[ "https://github.com/bminixhofer/compoundpiece" ]
https://huggingface.co./papers/2305.14214
1
0
0
3
[ "benjamin/compoundpiece", "benjamin/compoundpiece-stage1" ]
[ "benjamin/compoundpiece" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.25.bib
https://aclanthology.org/2023.emnlp-main.25/
@inproceedings{wang-etal-2023-improving, title = "Improving Image Captioning via Predicting Structured Concepts", author = "Wang, Ting and Chen, Weidong and Tian, Yuanhe and Song, Yan and Mao, Zhendong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.25", doi = "10.18653/v1/2023.emnlp-main.25", pages = "360--370", abstract = "Having the difficulty of solving the semantic gap between images and texts for the image captioning task, conventional studies in this area paid some attention to treating semantic concepts as a bridge between the two modalities and improved captioning performance accordingly. Although promising results on concept prediction were obtained, the aforementioned studies normally ignore the relationship among concepts, which relies on not only objects in the image, but also word dependencies in the text, so that offers a considerable potential for improving the process of generating good descriptions. In this paper, we propose a structured concept predictor (SCP) to predict concepts and their structures, then we integrate them into captioning, so that enhance the contribution of visual signals in this task via concepts and further use their relations to distinguish cross-modal semantics for better description generation. Particularly, we design weighted graph convolutional networks (W-GCN) to depict concept relations driven by word dependencies, and then learns differentiated contributions from these concepts for following decoding process. Therefore, our approach captures potential relations among concepts and discriminatively learns different concepts, so that effectively facilitates image captioning with inherited information across modalities. Extensive experiments and their results demonstrate the effectiveness of our approach as well as each proposed module in this work.", }
Given the difficulty of bridging the semantic gap between images and texts in the image captioning task, conventional studies have treated semantic concepts as a bridge between the two modalities and improved captioning performance accordingly. Although promising results on concept prediction were obtained, these studies normally ignore the relationships among concepts, which rely not only on objects in the image but also on word dependencies in the text, and therefore offer considerable potential for improving the generation of good descriptions. In this paper, we propose a structured concept predictor (SCP) to predict concepts and their structures, and integrate them into captioning, thereby enhancing the contribution of visual signals via concepts and further using their relations to distinguish cross-modal semantics for better description generation. In particular, we design weighted graph convolutional networks (W-GCN) to depict concept relations driven by word dependencies, and then learn differentiated contributions from these concepts for the subsequent decoding process. Our approach thus captures potential relations among concepts and discriminatively learns different concepts, effectively facilitating image captioning with inherited information across modalities. Extensive experiments and their results demonstrate the effectiveness of our approach as well as of each proposed module in this work.
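A minimal weighted graph convolution in the spirit of the W-GCN described above: edge weights, e.g. derived from word-dependency statistics, scale neighbor contributions before a shared linear transform. This is a generic sketch, not the paper's exact parameterization:

```python
import torch
import torch.nn as nn

class WeightedGCNLayer(nn.Module):
    """Generic weighted GCN layer: neighbors are aggregated by a
    weighted mean (edge weights on the adjacency matrix), then passed
    through a shared linear transform and ReLU."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats, adj_weights):
        # node_feats: (n_nodes, dim); adj_weights: (n_nodes, n_nodes),
        # non-negative edge weights with self-loops on the diagonal.
        deg = adj_weights.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        agg = (adj_weights / deg) @ node_feats  # weighted mean of neighbors
        return torch.relu(self.linear(agg))

layer = WeightedGCNLayer(dim=16)
x = torch.randn(5, 16)                                   # 5 concept nodes
w = torch.rand(5, 5) * (torch.rand(5, 5) > 0.5) + torch.eye(5)
print(layer(x, w).shape)  # torch.Size([5, 16])
```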
[ "Wang, Ting", "Chen, Weidong", "Tian, Yuanhe", "Song, Yan", "Mao, Zhendong" ]
Improving Image Captioning via Predicting Structured Concepts
emnlp-main.25
2311.08223
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.26.bib
https://aclanthology.org/2023.emnlp-main.26/
@inproceedings{jones-etal-2023-gatitos, title = "{GATITOS}: Using a New Multilingual Lexicon for Low-resource Machine Translation", author = "Jones, Alexander and Caswell, Isaac and Firat, Orhan and Saxena, Ishank", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.26", doi = "10.18653/v1/2023.emnlp-main.26", pages = "371--405", abstract = "Modern machine translation models and language models are able to translate without having been trained on parallel data, greatly expanding the set of languages that they can serve. However, these models still struggle in a variety of predictable ways, a problem that cannot be overcome without at least some trusted bilingual data. This work expands on a cheap and abundant resource to combat this problem: bilingual lexica. We test the efficacy of bilingual lexica in a real-world set-up, on 200-language translation models trained on web-crawled text. We present several findings: (1) using lexical data augmentation, we demonstrate sizable performance gains for unsupervised translation; (2) we compare several families of data augmentation, demonstrating that they yield similar improvements, and can be combined for even greater improvements; (3) we demonstrate the importance of carefully curated lexica over larger, noisier ones, especially with larger models; and (4) we compare the efficacy of multilingual lexicon data versus human-translated parallel data. Based on results from (3), we develop and open-source GATITOS, a high-quality, curated dataset in 168 tail languages, one of the first human-translated resources to cover many of these languages.", }
Modern machine translation models and language models are able to translate without having been trained on parallel data, greatly expanding the set of languages that they can serve. However, these models still struggle in a variety of predictable ways, a problem that cannot be overcome without at least some trusted bilingual data. This work expands on a cheap and abundant resource to combat this problem: bilingual lexica. We test the efficacy of bilingual lexica in a real-world set-up, on 200-language translation models trained on web-crawled text. We present several findings: (1) using lexical data augmentation, we demonstrate sizable performance gains for unsupervised translation; (2) we compare several families of data augmentation, demonstrating that they yield similar improvements, and can be combined for even greater improvements; (3) we demonstrate the importance of carefully curated lexica over larger, noisier ones, especially with larger models; and (4) we compare the efficacy of multilingual lexicon data versus human-translated parallel data. Based on results from (3), we develop and open-source GATITOS, a high-quality, curated dataset in 168 tail languages, one of the first human-translated resources to cover many of these languages.
[ "Jones, Alex", "er", "Caswell, Isaac", "Firat, Orhan", "Saxena, Ishank" ]
GATITOS: Using a New Multilingual Lexicon for Low-resource Machine Translation
emnlp-main.26
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.27.bib
https://aclanthology.org/2023.emnlp-main.27/
@inproceedings{gao-etal-2023-continually, title = "Continually Improving Extractive {QA} via Human Feedback", author = "Gao, Ge and Chen, Hung-Ting and Artzi, Yoav and Choi, Eunsol", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.27", doi = "10.18653/v1/2023.emnlp-main.27", pages = "406--423", abstract = "We study continually improving an extractive question answering (QA) system via human user feedback. We design and deploy an iterative approach, where information-seeking users ask questions, receive model-predicted answers, and provide feedback. We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time. Our experiments show effective improvement from user feedback of extractive QA models over time across different data regimes, including significant potential for domain adaptation.", }
We study continually improving an extractive question answering (QA) system via human user feedback. We design and deploy an iterative approach, where information-seeking users ask questions, receive model-predicted answers, and provide feedback. We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time. Our experiments show effective improvement from user feedback of extractive QA models over time across different data regimes, including significant potential for domain adaptation.
[ "Gao, Ge", "Chen, Hung-Ting", "Artzi, Yoav", "Choi, Eunsol" ]
Continually Improving Extractive QA via Human Feedback
emnlp-main.27
2305.12473
[ "https://github.com/lil-lab/qa-from-hf" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.28.bib
https://aclanthology.org/2023.emnlp-main.28/
@inproceedings{chen-etal-2023-using, title = "Using Interpretation Methods for Model Enhancement", author = "Chen, Zhuo and Jiang, Chengyue and Tu, Kewei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.28", doi = "10.18653/v1/2023.emnlp-main.28", pages = "424--438", abstract = "In the age of neural natural language processing, there are plenty of works trying to derive interpretations of neural models. Intuitively, when gold rationales exist during training, one can additionally train the model to match its interpretation with the rationales. However, this intuitive idea has not been fully explored. In this paper, we propose a framework of utilizing interpretation methods and gold rationales to enhance models. Our framework is very general in the sense that it can incorporate various interpretation methods. Previously proposed gradient-based methods can be shown as an instance of our framework. We also propose two novel instances utilizing two other types of interpretation methods, erasure/replace-based and extractor-based methods, for model enhancement. We conduct comprehensive experiments on a variety of tasks. Experimental results show that our framework is effective especially in low-resource settings in enhancing models with various interpretation methods, and our two newly-proposed methods outperform gradient-based methods in most settings. Code is available at https://github.com/Chord-Chen-30/UIMER.", }
In the age of neural natural language processing, there are plenty of works trying to derive interpretations of neural models. Intuitively, when gold rationales exist during training, one can additionally train the model to match its interpretation with the rationales. However, this intuitive idea has not been fully explored. In this paper, we propose a framework of utilizing interpretation methods and gold rationales to enhance models. Our framework is very general in the sense that it can incorporate various interpretation methods. Previously proposed gradient-based methods can be shown as an instance of our framework. We also propose two novel instances utilizing two other types of interpretation methods, erasure/replace-based and extractor-based methods, for model enhancement. We conduct comprehensive experiments on a variety of tasks. Experimental results show that our framework is effective especially in low-resource settings in enhancing models with various interpretation methods, and our two newly-proposed methods outperform gradient-based methods in most settings. Code is available at https://github.com/Chord-Chen-30/UIMER.
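One gradient-based instance of this framework, training a saliency interpretation to match gold rationales, can be sketched directly. The loss definition and the toy model below are assumptions for illustration; in practice the alignment term would be added to the task loss:

```python
import torch

def saliency_alignment_loss(logits, inputs_embeds, rationale_mask, target):
    """Compute token saliency as the gradient norm of the target logit
    w.r.t. the input embeddings, then penalize its L1 mismatch with the
    gold rationale distribution. A simplified sketch of the framework's
    gradient-based family."""
    score = logits[0, target]
    grads = torch.autograd.grad(score, inputs_embeds, create_graph=True)[0]
    saliency = grads.norm(dim=-1).squeeze(0)                 # (seq_len,)
    saliency = saliency / saliency.sum().clamp(min=1e-8)
    rationale = rationale_mask / rationale_mask.sum().clamp(min=1e-8)
    return (saliency - rationale).abs().sum()

# Toy "model" so the sketch runs end to end.
torch.manual_seed(0)
emb = torch.randn(1, 6, 8, requires_grad=True)   # (batch, seq, dim)
logits = emb.mean(dim=1) @ torch.randn(8, 3)     # (batch, n_classes)
rationale = torch.tensor([0., 1., 1., 0., 0., 0.])
loss = saliency_alignment_loss(logits, emb, rationale, target=1)
loss.backward()
print(float(loss))
```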
[ "Chen, Zhuo", "Jiang, Chengyue", "Tu, Kewei" ]
Using Interpretation Methods for Model Enhancement
emnlp-main.28
2404.02068
[ "https://github.com/chord-chen-30/uimer" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.29.bib
https://aclanthology.org/2023.emnlp-main.29/
@inproceedings{zhang-etal-2023-expression, title = "An Expression Tree Decoding Strategy for Mathematical Equation Generation", author = "Zhang, Wenqi and Shen, Yongliang and Nong, Qingpeng and Tan, Zeqi and Ma, Yanna and Lu, Weiming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.29", doi = "10.18653/v1/2023.emnlp-main.29", pages = "439--456", abstract = "Generating mathematical equations from natural language requires an accurate understanding of the relations among math expressions. Existing approaches can be broadly categorized into token-level and expression-level generation. The former treats equations as a mathematical language, sequentially generating math tokens. Expression-level methods generate each expression one by one. However, each expression represents a solving step, and there naturally exist parallel or dependent relations between these steps, which are ignored by current sequential methods. Therefore, we integrate tree structure into the expression-level generation and advocate an expression tree decoding strategy. To generate a tree with expression as its node, we employ a layer-wise parallel decoding strategy: we decode multiple independent expressions (leaf nodes) in parallel at each layer and repeat parallel decoding layer by layer to sequentially generate these parent node expressions that depend on others. Besides, a bipartite matching algorithm is adopted to align multiple predictions with annotations for each layer. Experiments show our method outperforms other baselines, especially for these equations with complex structures.", }
Generating mathematical equations from natural language requires an accurate understanding of the relations among math expressions. Existing approaches can be broadly categorized into token-level and expression-level generation. The former treats equations as a mathematical language, sequentially generating math tokens. Expression-level methods generate each expression one by one. However, each expression represents a solving step, and there naturally exist parallel or dependent relations between these steps, which are ignored by current sequential methods. Therefore, we integrate tree structure into expression-level generation and advocate an expression tree decoding strategy. To generate a tree with expressions as its nodes, we employ a layer-wise parallel decoding strategy: we decode multiple independent expressions (leaf nodes) in parallel at each layer and repeat parallel decoding layer by layer to sequentially generate the parent-node expressions that depend on others. Besides, a bipartite matching algorithm is adopted to align multiple predictions with annotations at each layer. Experiments show our method outperforms other baselines, especially on equations with complex structures.
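The layer-wise alignment step relies on bipartite matching; the Hungarian algorithm, available through SciPy, is the standard way to implement it. The cost values below are invented purely for illustration (in practice they might be derived from expression similarity or likelihood):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: rows = predicted expressions at one decoding layer,
# cols = gold expressions; lower cost means a better match.
cost = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.6, 0.3],
])
row_idx, col_idx = linear_sum_assignment(cost)  # minimum-cost matching
for r, c in zip(row_idx, col_idx):
    print(f"prediction {r} <-> gold expression {c} (cost {cost[r, c]})")
```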
[ "Zhang, Wenqi", "Shen, Yongliang", "Nong, Qingpeng", "Tan, Zeqi", "Ma, Yanna", "Lu, Weiming" ]
An Expression Tree Decoding Strategy for Mathematical Equation Generation
emnlp-main.29
2310.09619
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.30.bib
https://aclanthology.org/2023.emnlp-main.30/
@inproceedings{yang-etal-2023-bootstrapping, title = "Bootstrapping Small {\&} High Performance Language Models with Unmasking-Removal Training Policy", author = "Yang, Yahan and Sulem, Elior and Lee, Insup and Roth, Dan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.30", doi = "10.18653/v1/2023.emnlp-main.30", pages = "457--464", abstract = "BabyBERTa, a language model trained on small-scale child-directed speech while none of the words are unmasked during training, has been shown to achieve a level of grammaticality comparable to that of RoBERTa-base, which is trained on 6,000 times more words and 15 times more parameters. Relying on this promising result, we explore in this paper the performance of BabyBERTa-based models in downstream tasks, focusing on Semantic Role Labeling (SRL) and two Extractive Question Answering tasks, with the aim of building more efficient systems that rely on less data and smaller models. We investigate the influence of these models both alone and as a starting point to larger pre-trained models, separately examining the contribution of the pre-training data, the vocabulary, and the masking policy on the downstream task performance. Our results show that BabyBERTa trained with unmasking-removal policy is a much stronger starting point for downstream tasks compared to the use of RoBERTa masking policy when 10M words are used for training and that this tendency persists, although to a lesser extent, when adding more training data.", }
BabyBERTa, a language model trained on small-scale child-directed speech while none of the words are unmasked during training, has been shown to achieve a level of grammaticality comparable to that of RoBERTa-base, which is trained on 6,000 times more words and has 15 times more parameters. Relying on this promising result, we explore in this paper the performance of BabyBERTa-based models on downstream tasks, focusing on Semantic Role Labeling (SRL) and two Extractive Question Answering tasks, with the aim of building more efficient systems that rely on less data and smaller models. We investigate the influence of these models both alone and as a starting point for larger pre-trained models, separately examining the contribution of the pre-training data, the vocabulary, and the masking policy on downstream task performance. Our results show that BabyBERTa trained with the unmasking-removal policy is a much stronger starting point for downstream tasks than one trained with the RoBERTa masking policy when 10M words are used for training, and that this tendency persists, although to a lesser extent, when adding more training data.
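The difference between the two masking policies is easy to make concrete. The sketch below contrasts RoBERTa's standard 80/10/10 corruption with the unmasking-removal policy, where every selected token becomes [MASK]; the selection rate and toy vocabulary are illustrative:

```python
import random

def mask_tokens(tokens, policy, mask_rate=0.15, vocab=("cat", "dog", "sun"), rng=None):
    """Contrast RoBERTa's 80/10/10 corruption with the unmasking-removal
    policy, under which no selected token is left unmasked."""
    rng = rng or random.Random(0)
    out = list(tokens)
    for i in range(len(out)):
        if rng.random() >= mask_rate:
            continue  # token not selected for prediction
        if policy == "unmasking-removal":
            out[i] = "[MASK]"                # 100% mask
        elif policy == "roberta":
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"            # 80% mask
            elif r < 0.9:
                out[i] = rng.choice(vocab)   # 10% random token
            # remaining 10%: token kept unmasked
    return out

sent = "the quick brown fox jumps over the lazy dog".split()
print(mask_tokens(sent, "roberta"))
print(mask_tokens(sent, "unmasking-removal"))
```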
[ "Yang, Yahan", "Sulem, Elior", "Lee, Insup", "Roth, Dan" ]
Bootstrapping Small & High Performance Language Models with Unmasking-Removal Training Policy
emnlp-main.30
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.31.bib
https://aclanthology.org/2023.emnlp-main.31/
@inproceedings{yoon-bak-2023-diversity, title = "Diversity Enhanced Narrative Question Generation for Storybooks", author = "Yoon, Hokeun and Bak, JinYeong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.31", doi = "10.18653/v1/2023.emnlp-main.31", pages = "465--482", abstract = "Question generation (QG) from a given context can enhance comprehension, engagement, assessment, and overall efficacy in learning or conversational environments. Despite recent advancements in QG, the challenge of enhancing or measuring the diversity of generated questions often remains unaddressed. In this paper, we introduce a multi-question generation model (mQG), which is capable of generating multiple, diverse, and answerable questions by focusing on context and questions. To validate the answerability of the generated questions, we employ a SQuAD 2.0 fine-tuned question answering model, classifying the questions as answerable or not. We train and evaluate mQG on the FairytaleQA dataset, a well-structured QA dataset based on storybooks, with narrative questions. We further apply a zero-shot adaptation on the TellMeWhy and SQuAD1.1 datasets. mQG shows promising results across various evaluation metrics, among strong baselines.", }
Question generation (QG) from a given context can enhance comprehension, engagement, assessment, and overall efficacy in learning or conversational environments. Despite recent advancements in QG, the challenge of enhancing or measuring the diversity of generated questions often remains unaddressed. In this paper, we introduce a multi-question generation model (mQG), which is capable of generating multiple, diverse, and answerable questions by focusing on context and questions. To validate the answerability of the generated questions, we employ a SQuAD 2.0 fine-tuned question answering model, classifying the questions as answerable or not. We train and evaluate mQG on the FairytaleQA dataset, a well-structured QA dataset based on storybooks, with narrative questions. We further apply a zero-shot adaptation on the TellMeWhy and SQuAD1.1 datasets. mQG shows promising results across various evaluation metrics, among strong baselines.
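The answerability filter described above can be sketched with a public SQuAD 2.0 checkpoint. `deepset/roberta-base-squad2` is a widely used model, not necessarily the one fine-tuned in the paper, and an empty predicted answer is taken to mean "unanswerable" per SQuAD 2.0 conventions:

```python
# pip install transformers torch
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "The wolf huffed and puffed until the straw house fell down."
generated_questions = [
    "What did the wolf do to the straw house?",
    "What color was the princess's dress?",
]
for q in generated_questions:
    pred = qa(question=q, context=context, handle_impossible_answer=True)
    # An empty answer span signals "no answer" for SQuAD 2.0 models.
    verdict = "answerable" if pred["answer"].strip() else "unanswerable"
    print(f"{q} -> {verdict}")
```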
[ "Yoon, Hokeun", "Bak, JinYeong" ]
Diversity Enhanced Narrative Question Generation for Storybooks
emnlp-main.31
2310.16446
[ "https://github.com/hkyoon95/mqg" ]
https://huggingface.co./papers/2310.16446
0
0
0
2
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.32.bib
https://aclanthology.org/2023.emnlp-main.32/
@inproceedings{dong-etal-2023-debiasing, title = "Debiasing Made State-of-the-art: Revisiting the Simple Seed-based Weak Supervision for Text Classification", author = "Dong, Chengyu and Wang, Zihan and Shang, Jingbo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.32", doi = "10.18653/v1/2023.emnlp-main.32", pages = "483--493", abstract = "Recent advances in weakly supervised text classification mostly focus on designing sophisticated methods to turn high-level human heuristics into quality pseudo-labels. In this paper, we revisit the seed matching-based method, which is arguably the simplest way to generate pseudo-labels, and show that its power was greatly underestimated. We show that the limited performance of seed matching is largely due to the label bias injected by the simple seed-match rule, which prevents the classifier from learning reliable confidence for selecting high-quality pseudo-labels. Interestingly, simply deleting the seed words present in the matched input texts can mitigate the label bias and help learn better confidence. Subsequently, the performance achieved by seed matching can be improved significantly, making it on par with or even better than the state-of-the-art. Furthermore, to handle the case when the seed words are not made known, we propose to simply delete the word tokens in the input text randomly with a high deletion ratio. Remarkably, seed matching equipped with this random deletion method can often achieve even better performance than that with seed deletion.", }
Recent advances in weakly supervised text classification mostly focus on designing sophisticated methods to turn high-level human heuristics into quality pseudo-labels. In this paper, we revisit the seed matching-based method, which is arguably the simplest way to generate pseudo-labels, and show that its power was greatly underestimated. We show that the limited performance of seed matching is largely due to the label bias injected by the simple seed-match rule, which prevents the classifier from learning reliable confidence for selecting high-quality pseudo-labels. Interestingly, simply deleting the seed words present in the matched input texts can mitigate the label bias and help learn better confidence. Subsequently, the performance achieved by seed matching can be improved significantly, making it on par with or even better than the state-of-the-art. Furthermore, to handle the case when the seed words are not made known, we propose to simply delete the word tokens in the input text randomly with a high deletion ratio. Remarkably, seed matching equipped with this random deletion method can often achieve even better performance than that with seed deletion.
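Both the seed-matching baseline and the two debiasing variants (seed deletion and high-ratio random deletion) are simple enough to sketch; the seed words, classes, and deletion ratio below are invented for illustration:

```python
import random

SEEDS = {"sports": {"game", "coach", "team"}, "politics": {"senate", "vote"}}

def pseudo_label(text, seeds):
    """Seed matching: assign the class whose seed words appear most."""
    toks = text.lower().split()
    hits = {c: sum(t in s for t in toks) for c, s in seeds.items()}
    label = max(hits, key=hits.get)
    return label if hits[label] > 0 else None

def delete_seeds(text, seed_words):
    """Seed deletion: drop matched seed words so the classifier cannot
    shortcut on them -- the paper's debiasing step."""
    return " ".join(t for t in text.split() if t.lower() not in seed_words)

def random_delete(text, ratio=0.5, rng=None):
    """Random deletion for when seed words are unknown: drop tokens
    with a high deletion ratio."""
    rng = rng or random.Random(0)
    return " ".join(t for t in text.split() if rng.random() > ratio)

doc = "The coach praised the team after the game"
label = pseudo_label(doc, SEEDS)
print(label, "|", delete_seeds(doc, SEEDS[label]), "|", random_delete(doc))
```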
[ "Dong, Chengyu", "Wang, Zihan", "Shang, Jingbo" ]
Debiasing Made State-of-the-art: Revisiting the Simple Seed-based Weak Supervision for Text Classification
emnlp-main.32
2305.14794
[ "https://github.com/shwinshaker/simseed" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.33.bib
https://aclanthology.org/2023.emnlp-main.33/
@inproceedings{chen-etal-2023-enhance, title = "How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning", author = "Chen, Hang and Yang, Xinyu and Luo, Jing and Zhu, Wenjing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.33", doi = "10.18653/v1/2023.emnlp-main.33", pages = "494--512", abstract = "Our investigation into the Affective Reasoning in Conversation (ARC) task highlights the challenge of causal discrimination. Almost all existing models, including large language models (LLMs), excel at capturing semantic correlations within utterance embeddings but fall short in determining the specific causal relationships. To overcome this limitation, we propose the incorporation of \textit{i.i.d.} noise terms into the conversation process, thereby constructing a structural causal model (SCM). It explores how distinct causal relationships of fitted embeddings can be discerned through independent conditions. To facilitate the implementation of deep learning, we introduce the cogn frameworks to handle unstructured conversation data, and employ an autoencoder architecture to regard the unobservable noise as learnable {``}implicit causes.{''} Moreover, we curate a synthetic dataset that includes i.i.d. noise. Through comprehensive experiments, we validate the effectiveness and interpretability of our approach. Our code is available in https://github.com/Zodiark-ch/mater-of-our-EMNLP2023-paper.", }
Our investigation into the Affective Reasoning in Conversation (ARC) task highlights the challenge of causal discrimination. Almost all existing models, including large language models (LLMs), excel at capturing semantic correlations within utterance embeddings but fall short in determining the specific causal relationships. To overcome this limitation, we propose the incorporation of i.i.d. noise terms into the conversation process, thereby constructing a structural causal model (SCM). It explores how distinct causal relationships of fitted embeddings can be discerned through independent conditions. To facilitate the implementation of deep learning, we introduce the cogn frameworks to handle unstructured conversation data, and employ an autoencoder architecture to regard the unobservable noise as learnable "implicit causes." Moreover, we curate a synthetic dataset that includes i.i.d. noise. Through comprehensive experiments, we validate the effectiveness and interpretability of our approach. Our code is available at https://github.com/Zodiark-ch/mater-of-our-EMNLP2023-paper.
[ "Chen, Hang", "Yang, Xinyu", "Luo, Jing", "Zhu, Wenjing" ]
How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning
emnlp-main.33
2305.02615
[ "https://github.com/zodiark-ch/mater-of-our-emnlp2023-paper" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.34.bib
https://aclanthology.org/2023.emnlp-main.34/
@inproceedings{si-etal-2023-compressing, title = "Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering", author = "Si, Qingyi and Liu, Yuanxin and Lin, Zheng and Fu, Peng and Cao, Yanan and Wang, Weiping", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.34", doi = "10.18653/v1/2023.emnlp-main.34", pages = "513--529", abstract = "Despite the excellent performance of vision-language pre-trained models (VLPs) on conventional VQA task, they still suffer from two problems: First, VLPs tend to rely on language biases in datasets and fail to generalize to out-of-distribution (OOD) data. Second, they are inefficient in terms of memory footprint and computation. Although promising progress has been made in both problems, most existing works tackle them independently. To facilitate the application of VLP to VQA tasks, it is imperative to jointly study VLP compression and OOD robustness, which, however, has not yet been explored. This paper investigates whether a VLP can be compressed and debiased simultaneously by searching sparse and robust subnetworks. To this end, we systematically study the design of a training and compression pipeline to search the subnetworks, as well as the assignment of sparsity to different modality-specific modules. Our experiments involve 2 VLPs, 2 compression methods, 4 training methods, 2 datasets and a range of sparsity levels. Our results show that there indeed exist sparse and robust subnetworks, which are competitive with the debiased full VLP and clearly outperform the debiasing SoTAs with fewer parameters on OOD datasets VQA-CP v2 and VQA-VS. The codes can be found at https://github.com/PhoebusSi/Compress-Robust-VQA.", }
Despite the excellent performance of vision-language pre-trained models (VLPs) on the conventional VQA task, they still suffer from two problems: First, VLPs tend to rely on language biases in datasets and fail to generalize to out-of-distribution (OOD) data. Second, they are inefficient in terms of memory footprint and computation. Although promising progress has been made in both problems, most existing works tackle them independently. To facilitate the application of VLP to VQA tasks, it is imperative to jointly study VLP compression and OOD robustness, which, however, has not yet been explored. This paper investigates whether a VLP can be compressed and debiased simultaneously by searching sparse and robust subnetworks. To this end, we systematically study the design of a training and compression pipeline to search the subnetworks, as well as the assignment of sparsity to different modality-specific modules. Our experiments involve 2 VLPs, 2 compression methods, 4 training methods, 2 datasets and a range of sparsity levels. Our results show that there indeed exist sparse and robust subnetworks, which are competitive with the debiased full VLP and clearly outperform the debiasing SoTAs with fewer parameters on OOD datasets VQA-CP v2 and VQA-VS. The codes can be found at https://github.com/PhoebusSi/Compress-Robust-VQA.
[ "Si, Qingyi", "Liu, Yuanxin", "Lin, Zheng", "Fu, Peng", "Cao, Yanan", "Wang, Weiping" ]
Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering
emnlp-main.34
2210.14558
[ "https://github.com/phoebussi/compress-robust-vqa" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.35.bib
https://aclanthology.org/2023.emnlp-main.35/
@inproceedings{cole-etal-2023-selectively, title = "Selectively Answering Ambiguous Questions", author = "Cole, Jeremy and Zhang, Michael and Gillick, Daniel and Eisenschlos, Julian and Dhingra, Bhuwan and Eisenstein, Jacob", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.35", doi = "10.18653/v1/2023.emnlp-main.35", pages = "530--543", abstract = "Trustworthy language models should abstain from answering questions when they do not know the answer. However, the answer to a question can be unknown for a variety of reasons. Prior research has focused on the case in which the question is clear and the answer is unambiguous but possibly unknown. However, the answer to a question can also be unclear due to uncertainty of the questioner{'}s intent or context. We investigate question answering from this perspective, focusing on answering a subset of questions with a high degree of accuracy, from a set of questions in which many are inherently ambiguous. In this setting, we find that the most reliable approach to calibration involves quantifying repetition within a set of sampled model outputs, rather than the model{'}s likelihood or self-verification as used in prior work. We find this to be the case across different types of uncertainty, varying model scales and both with or without instruction tuning. Our results suggest that sampling-based confidence scores help calibrate answers to relatively unambiguous questions, with more dramatic improvements on ambiguous questions.", }
Trustworthy language models should abstain from answering questions when they do not know the answer. However, the answer to a question can be unknown for a variety of reasons. Prior research has focused on the case in which the question is clear and the answer is unambiguous but possibly unknown. However, the answer to a question can also be unclear due to uncertainty of the questioner's intent or context. We investigate question answering from this perspective, focusing on answering a subset of questions with a high degree of accuracy, from a set of questions in which many are inherently ambiguous. In this setting, we find that the most reliable approach to calibration involves quantifying repetition within a set of sampled model outputs, rather than the model's likelihood or self-verification as used in prior work. We find this to be the case across different types of uncertainty, varying model scales, and both with and without instruction tuning. Our results suggest that sampling-based confidence scores help calibrate answers to relatively unambiguous questions, with more dramatic improvements on ambiguous questions.
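A minimal sketch of the repetition-based calibration idea: sample several answers, take the modal answer's frequency as confidence, and abstain below a threshold. The `sample_answer` callback, the surface normalization, and the threshold are stand-ins, not the paper's exact setup.

```python
from collections import Counter

def repetition_confidence(sample_answer, question, n_samples=20):
    """Confidence = agreement rate among temperature-sampled answers.

    `sample_answer(question) -> str` is a hypothetical model call.
    """
    answers = [sample_answer(question) for _ in range(n_samples)]
    normalized = [a.strip().lower() for a in answers]  # crude surface matching
    top, count = Counter(normalized).most_common(1)[0]
    return top, count / n_samples

def answer_or_abstain(sample_answer, question, threshold=0.6):
    answer, confidence = repetition_confidence(sample_answer, question)
    return answer if confidence >= threshold else None  # None signals abstention
```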
[ "Cole, Jeremy", "Zhang, Michael", "Gillick, Daniel", "Eisenschlos, Julian", "Dhingra, Bhuwan", "Eisenstein, Jacob" ]
Selectively Answering Ambiguous Questions
emnlp-main.35
2305.14613
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.36.bib
https://aclanthology.org/2023.emnlp-main.36/
@inproceedings{lee-etal-2023-temporal, title = "Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning", author = "Lee, Dong-Ho and Ahrabian, Kian and Jin, Woojeong and Morstatter, Fred and Pujara, Jay", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.36", doi = "10.18653/v1/2023.emnlp-main.36", pages = "544--557", abstract = "Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we develop an approach to use in-context learning (ICL) with large language models (LLMs) for TKG forecasting. Our extensive evaluation compares diverse baselines, including both simple heuristics and state-of-the-art (SOTA) supervised models, against pre-trained LLMs across several popular benchmarks and experimental settings. We observe that naive LLMs perform on par with SOTA models, which employ carefully designed architectures and supervised training for the forecasting task, falling within the (-3.6{\%}, +1.5{\%}) Hits@1 margin relative to the median performance. To better understand the strengths of LLMs for forecasting, we explore different approaches for selecting historical facts, constructing prompts, controlling information propagation, and parsing outputs into a probability distribution. A surprising finding from our experiments is that LLM performance endures ($\pm$0.4{\%} Hit@1) even when semantic information is removed by mapping entities/relations to arbitrary numbers, suggesting that prior semantic knowledge is unnecessary; rather, LLMs can leverage the symbolic patterns in the context to achieve such a strong performance. Our analysis also reveals that ICL enables LLMs to learn irregular patterns from the historical context, going beyond frequency and recency biases", }
Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we develop an approach to use in-context learning (ICL) with large language models (LLMs) for TKG forecasting. Our extensive evaluation compares diverse baselines, including both simple heuristics and state-of-the-art (SOTA) supervised models, against pre-trained LLMs across several popular benchmarks and experimental settings. We observe that naive LLMs perform on par with SOTA models, which employ carefully designed architectures and supervised training for the forecasting task, falling within the (-3.6%, +1.5%) Hits@1 margin relative to the median performance. To better understand the strengths of LLMs for forecasting, we explore different approaches for selecting historical facts, constructing prompts, controlling information propagation, and parsing outputs into a probability distribution. A surprising finding from our experiments is that LLM performance endures (±0.4% Hits@1) even when semantic information is removed by mapping entities/relations to arbitrary numbers, suggesting that prior semantic knowledge is unnecessary; rather, LLMs can leverage the symbolic patterns in the context to achieve such a strong performance. Our analysis also reveals that ICL enables LLMs to learn irregular patterns from the historical context, going beyond frequency and recency biases.
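The pipeline is easy to picture: past quadruples are rendered as history lines, the query is left incomplete for the model to finish, and candidate scores are normalized into a distribution. The template below, and the softmax over hypothetical per-candidate log-probabilities, are assumed illustrations rather than the paper's exact prompt.

```python
import math

def build_icl_prompt(history, query):
    """history: [(time, subject, relation, object), ...] in temporal order.
    query:   (time, subject, relation) whose object should be forecast."""
    lines = [f"{t}: [{s}, {r}, {o}]" for t, s, r, o in history]
    t, s, r = query
    lines.append(f"{t}: [{s}, {r},")  # the model completes the missing object
    return "\n".join(lines)

def candidates_to_distribution(candidate_logprobs):
    """Turn per-candidate log-probabilities (e.g., read off the model's output
    scores) into a normalized distribution over candidate entities."""
    m = max(candidate_logprobs.values())
    exp = {e: math.exp(lp - m) for e, lp in candidate_logprobs.items()}
    z = sum(exp.values())
    return {e: v / z for e, v in exp.items()}
```

Replacing entity and relation strings with arbitrary indices, as in the paper's semantic ablation, only changes the rendered lines, which is what makes that experiment cheap to run.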
[ "Lee, Dong-Ho", "Ahrabian, Kian", "Jin, Woojeong", "Morstatter, Fred", "Pujara, Jay" ]
Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning
emnlp-main.36
2305.10613
[ "https://github.com/usc-isi-i2/isi-tkg-icl" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.37.bib
https://aclanthology.org/2023.emnlp-main.37/
@inproceedings{hwang-etal-2023-knowledge, title = "Knowledge Graph Compression Enhances Diverse Commonsense Generation", author = "Hwang, EunJeong and Thost, Veronika and Shwartz, Vered and Ma, Tengfei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.37", doi = "10.18653/v1/2023.emnlp-main.37", pages = "558--572", abstract = "Generating commonsense explanations requires reasoning about commonsense knowledge beyond what is explicitly mentioned in the context. Existing models use commonsense knowledge graphs such as ConceptNet to extract a subgraph of relevant knowledge pertaining to concepts in the input. However, due to the large coverage and, consequently, vast scale of ConceptNet, the extracted subgraphs may contain loosely related, redundant and irrelevant information, which can introduce noise into the model. We propose to address this by applying a differentiable graph compression algorithm that focuses on the relevant knowledge for the task. The compressed subgraphs yield considerably more diverse outputs when incorporated into models for the tasks of generating commonsense and abductive explanations. Moreover, our model achieves better quality-diversity tradeoff than a large language model with 100 times the number of parameters. Our generic approach can be applied to additional NLP tasks that can benefit from incorporating external knowledge.", }
Generating commonsense explanations requires reasoning about commonsense knowledge beyond what is explicitly mentioned in the context. Existing models use commonsense knowledge graphs such as ConceptNet to extract a subgraph of relevant knowledge pertaining to concepts in the input. However, due to the large coverage and, consequently, vast scale of ConceptNet, the extracted subgraphs may contain loosely related, redundant and irrelevant information, which can introduce noise into the model. We propose to address this by applying a differentiable graph compression algorithm that focuses on the relevant knowledge for the task. The compressed subgraphs yield considerably more diverse outputs when incorporated into models for the tasks of generating commonsense and abductive explanations. Moreover, our model achieves better quality-diversity tradeoff than a large language model with 100 times the number of parameters. Our generic approach can be applied to additional NLP tasks that can benefit from incorporating external knowledge.
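In spirit, the compression step scores entities in the retrieved subgraph and keeps only the task-relevant ones while staying differentiable end to end. The sketch below captures that with a learned scorer and a soft top-k; the actual method operates on multi-relational ConceptNet subgraphs and trains jointly with the generator.

```python
import torch
import torch.nn as nn

class SubgraphCompressor(nn.Module):
    """Score nodes and softly keep the most relevant ones (illustrative only)."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, node_emb: torch.Tensor, keep_ratio: float = 0.25):
        # node_emb: [N, D] embeddings of concepts in the retrieved subgraph.
        weights = torch.softmax(self.scorer(node_emb).squeeze(-1), dim=0)
        k = max(1, int(keep_ratio * node_emb.size(0)))
        kept = torch.topk(weights, k).indices
        # Multiplying by the soft weights keeps the scorer differentiable.
        return node_emb[kept] * weights[kept].unsqueeze(-1)
```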
[ "Hwang, EunJeong", "Thost, Veronika", "Shwartz, Vered", "Ma, Tengfei" ]
Knowledge Graph Compression Enhances Diverse Commonsense Generation
emnlp-main.37
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.38.bib
https://aclanthology.org/2023.emnlp-main.38/
@inproceedings{li-etal-2023-pragmatic, title = "Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models", author = "Li, Yiyuan and Menon, Rakesh and Ghosh, Sayan and Srivastava, Shashank", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.38", doi = "10.18653/v1/2023.emnlp-main.38", pages = "573--591", abstract = "Generalized quantifiers (e.g., $\textit{few}$, $\textit{most}$) are used to indicate the proportions predicates satisfy (for example, $\textit{some}$ apples are red). One way to interpret quantifier semantics is to explicitly bind these satisfactions with percentage scopes (e.g., 30{\%}-40{\%} of apples are red). This approach can be helpful for tasks like logic formalization and surface-form quantitative reasoning (Gordon and Schubert, 2010; Roy et al., 2015). However, it remains unclear if recent foundation models (Bommasani et al., 2021) possess this ability due to the absence of direct training signals. To explore this, we introduce QuRe, a crowd-sourced dataset of human-annotated generalized quantifiers in Wikipedia sentences featuring percentage-equipped predicates. We explore quantifier comprehension using PRESQUE, a framework that combines natural language inference and the Rational Speech Acts framework. Experimental results on the HVD dataset (Herbelot and Vecchi, 2015) and QuRe demonstrate PRESQUE{'}s superiority over a literal listener baseline, showing a 20{\%} relative improvement in F1 in predicting percentage scopes for quantifiers, even with no additional training.", }
Generalized quantifiers (e.g., few, most) are used to indicate the proportions predicates satisfy (for example, some apples are red). One way to interpret quantifier semantics is to explicitly bind these satisfactions with percentage scopes (e.g., 30%-40% of apples are red). This approach can be helpful for tasks like logic formalization and surface-form quantitative reasoning (Gordon and Schubert, 2010; Roy et al., 2015). However, it remains unclear if recent foundation models (Bommasani et al., 2021) possess this ability due to the absence of direct training signals. To explore this, we introduce QuRe, a crowd-sourced dataset of human-annotated generalized quantifiers in Wikipedia sentences featuring percentage-equipped predicates. We explore quantifier comprehension using PRESQUE, a framework that combines natural language inference and the Rational Speech Acts framework. Experimental results on the HVD dataset (Herbelot and Vecchi, 2015) and QuRe demonstrate PRESQUE's superiority over a literal listener baseline, showing a 20% relative improvement in F1 in predicting percentage scopes for quantifiers, even with no additional training.
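The RSA step is compact enough to show in full. Given a literal-listener matrix L0[q, s] = P(scope s | quantifier q), e.g. derived from NLI entailment scores, a pragmatic listener reasons about a speaker who chose the quantifier. Everything past the L0 input below is standard RSA; the uniform prior and rationality value are assumptions.

```python
import numpy as np

def pragmatic_listener(L0, prior=None, alpha=1.0):
    """L0: [num_quantifiers, num_scopes], rows are literal P(scope | quantifier).
    Returns L1 with the same shape after one Rational Speech Acts iteration."""
    L0 = np.asarray(L0, float)
    L0 = L0 / L0.sum(axis=1, keepdims=True)
    # Speaker: picks a quantifier in proportion to (L0 ** alpha), normalized
    # over quantifiers for each intended scope.
    S1 = L0 ** alpha
    S1 = S1 / S1.sum(axis=0, keepdims=True)
    prior = np.full(L0.shape[1], 1 / L0.shape[1]) if prior is None else prior
    L1 = S1 * prior                         # Bayes: P(scope | quantifier)
    return L1 / L1.sum(axis=1, keepdims=True)
```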
[ "Li, Yiyuan", "Menon, Rakesh", "Ghosh, Sayan", "Srivastava, Shashank" ]
Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models
emnlp-main.38
2311.04659
[ "https://github.com/nativeatom/presque" ]
https://huggingface.co./papers/2311.04659
0
0
0
4
[]
[ "billli/QuRe" ]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.39.bib
https://aclanthology.org/2023.emnlp-main.39/
@inproceedings{liu-etal-2023-llm, title = "{LLM}-{FP}4: 4-Bit Floating-Point Quantized Transformers", author = "Liu, Shih-yang and Liu, Zechun and Huang, Xijie and Dong, Pingcheng and Cheng, Kwang-Ting", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.39", doi = "10.18653/v1/2023.emnlp-main.39", pages = "592--605", abstract = "We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions, and it has emerged as a default choice in many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and clipping range. In this regard, we construct a strong FP-PTQ baseline by searching for the optimal quantization parameters. Furthermore, we observe a high inter-channel variance and low intra-channel variance pattern in activation distributions, which adds activation quantization difficulty. We recognize this pattern to be consistent across a spectrum of transformer models designed for diverse tasks such as LLMs, BERT, and Vision Transformer models. To tackle this, we propose per-channel activation quantization and show that these additional scaling factors can be reparameterized as exponential biases of weights, incurring a negligible cost. Our method, for the first time, can quantize both weights and activations in the LLaMA-13B to only 4-bit and achieves an average score of 63.1 on the common sense zero-shot reasoning tasks, which is only 5.8 lower than the full-precision model, significantly outperforming the previous state-of-the-art by 12.7 points. Code is available at: https://github.com/nbasyl/LLM-FP4.", }
We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions, and it has emerged as a default choice in many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and clipping range. In this regard, we construct a strong FP-PTQ baseline by searching for the optimal quantization parameters. Furthermore, we observe a high inter-channel variance and low intra-channel variance pattern in activation distributions, which adds activation quantization difficulty. We recognize this pattern to be consistent across a spectrum of transformer models designed for diverse tasks such as LLMs, BERT, and Vision Transformer models. To tackle this, we propose per-channel activation quantization and show that these additional scaling factors can be reparameterized as exponential biases of weights, incurring a negligible cost. Our method, for the first time, can quantize both weights and activations in the LLaMA-13B to only 4-bit and achieves an average score of 63.1 on the common sense zero-shot reasoning tasks, which is only 5.8 lower than the full-precision model, significantly outperforming the previous state-of-the-art by 12.7 points. Code is available at: https://github.com/nbasyl/LLM-FP4.
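The reparameterization trick is the most transferable piece: per-input-channel activation scales can be folded into the next layer's weights, so the activation quantizer sees a flatter distribution at no extra inference cost. A minimal sketch with plain scaling (the paper folds the scales into weight exponent biases):

```python
import torch

def fold_activation_scales(W, s):
    """Y = X @ W  ==  (X / s) @ (W * s[:, None]) for per-channel scales s > 0."""
    return W * s[:, None]

X = torch.randn(4, 8)
W = torch.randn(8, 16)
s = X.abs().amax(dim=0) + 1e-8          # per-input-channel activation range
assert torch.allclose(X @ W, (X / s) @ fold_activation_scales(W, s), atol=1e-5)
```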
[ "Liu, Shih-yang", "Liu, Zechun", "Huang, Xijie", "Dong, Pingcheng", "Cheng, Kwang-Ting" ]
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
emnlp-main.39
2310.16836
[ "https://github.com/nbasyl/llm-fp4" ]
https://huggingface.co./papers/2310.16836
3
13
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.40.bib
https://aclanthology.org/2023.emnlp-main.40/
@inproceedings{tang-etal-2023-improving, title = "Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers", author = "Tang, Chen and Wang, Shun and Goldsack, Tomas and Lin, Chenghua", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.40", doi = "10.18653/v1/2023.emnlp-main.40", pages = "606--618", abstract = "Abstracts derived from biomedical literature possess distinct domain-specific characteristics, including specialised writing styles and biomedical terminologies, which necessitate a deep understanding of the related literature. As a result, existing language models struggle to generate technical summaries that are on par with those produced by biomedical experts, given the absence of domain-specific background knowledge. This paper aims to enhance the performance of language models in biomedical abstractive summarisation by aggregating knowledge from external papers cited within the source article. We propose a novel attention-based citation aggregation model that integrates domain-specific knowledge from citation papers, allowing neural networks to generate summaries by leveraging both the paper content and relevant knowledge from citation papers. Furthermore, we construct and release a large-scale biomedical summarisation dataset that serves as a foundation for our research. Extensive experiments demonstrate that our model outperforms state-of-the-art approaches and achieves substantial improvements in abstractive biomedical text summarisation.", }
Abstracts derived from biomedical literature possess distinct domain-specific characteristics, including specialised writing styles and biomedical terminologies, which necessitate a deep understanding of the related literature. As a result, existing language models struggle to generate technical summaries that are on par with those produced by biomedical experts, given the absence of domain-specific background knowledge. This paper aims to enhance the performance of language models in biomedical abstractive summarisation by aggregating knowledge from external papers cited within the source article. We propose a novel attention-based citation aggregation model that integrates domain-specific knowledge from citation papers, allowing neural networks to generate summaries by leveraging both the paper content and relevant knowledge from citation papers. Furthermore, we construct and release a large-scale biomedical summarisation dataset that serves as a foundation for our research. Extensive experiments demonstrate that our model outperforms state-of-the-art approaches and achieves substantial improvements in abstractive biomedical text summarisation.
[ "Tang, Chen", "Wang, Shun", "Goldsack, Tomas", "Lin, Chenghua" ]
Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers
emnlp-main.40
2310.15684
[ "https://github.com/tangg555/biomed-sum" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.41.bib
https://aclanthology.org/2023.emnlp-main.41/
@inproceedings{ye-durrett-2023-explanation, title = "Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting", author = "Ye, Xi and Durrett, Greg", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.41", doi = "10.18653/v1/2023.emnlp-main.41", pages = "619--637", abstract = "Recent work has shown how to prompt large language models with explanations to obtain strong performance on textual reasoning tasks, i.e., the chain-of-thought paradigm. However, subtly different explanations can yield widely varying downstream task accuracy. Explanations that have not been {``}tuned{''} for a task, such as off-the-shelf explanations written by non-experts, may lead to mediocre performance. This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion. We first generate sets of candidate explanations for each example in the prompt using a leave-one-out scheme, then find an effective combination of these explanations with a two-stage framework. We first evaluate explanations for each in-context example in isolation according to two proxy metrics, log likelihood and accuracy on new examples. Then, we search over combinations of explanations to find one that yields high performance against a silver-labeled development set. Across four textual reasoning tasks spanning question answering, mathematical reasoning, and natural language inference, results show that our proxy metrics correlate with ground truth accuracy and our overall method can effectively improve prompts over crowdworker annotations and naive search strategies", }
Recent work has shown how to prompt large language models with explanations to obtain strong performance on textual reasoning tasks, i.e., the chain-of-thought paradigm. However, subtly different explanations can yield widely varying downstream task accuracy. Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by non-experts, may lead to mediocre performance. This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion. We first generate sets of candidate explanations for each example in the prompt using a leave-one-out scheme, then find an effective combination of these explanations with a two-stage framework. We first evaluate explanations for each in-context example in isolation according to two proxy metrics, log likelihood and accuracy on new examples. Then, we search over combinations of explanations to find one that yields high performance against a silver-labeled development set. Across four textual reasoning tasks spanning question answering, mathematical reasoning, and natural language inference, results show that our proxy metrics correlate with ground truth accuracy and our overall method can effectively improve prompts over crowdworker annotations and naive search strategies.
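The two-stage selection reduces to: prune each example's candidate explanations with a cheap proxy metric, then search combinations against a silver-labeled set. A compact sketch in which `proxy_score` and `silver_accuracy` are hypothetical callbacks (the paper studies log likelihood and one-shot accuracy as proxies):

```python
import itertools

def select_explanations(candidates, proxy_score, silver_accuracy, per_slot=2):
    """candidates[i]: alternative explanations for in-context example i.

    Stage 1: keep the top `per_slot` explanations per example by proxy score.
    Stage 2: exhaustively score the remaining combinations on silver labels
    (a beam or random search would replace this for larger prompts).
    """
    pruned = [
        sorted(cands, key=lambda e, i=i: proxy_score(e, i), reverse=True)[:per_slot]
        for i, cands in enumerate(candidates)
    ]
    return max(itertools.product(*pruned), key=silver_accuracy)
```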
[ "Ye, Xi", "Durrett, Greg" ]
Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting
emnlp-main.41
2302.04813
[ "https://github.com/xiye17/explselection" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.42.bib
https://aclanthology.org/2023.emnlp-main.42/
@inproceedings{dale-etal-2023-halomi, title = "{H}al{O}mi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation", author = "Dale, David and Voita, Elena and Lam, Janice and Hansanti, Prangthip and Ropers, Christophe and Kalbassi, Elahe and Gao, Cynthia and Barrault, Loic and Costa-juss{\`a}, Marta", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.42", doi = "10.18653/v1/2023.emnlp-main.42", pages = "638--653", abstract = "Hallucinations in machine translation are translations that contain information completely unrelated to the input. Omissions are translations that do not include some of the input information. While both cases tend to be catastrophic errors undermining user trust, annotated data with these types of pathologies is extremely scarce and is limited to a few high-resource languages. In this work, we release an annotated dataset for the hallucination and omission phenomena covering 18 translation directions with varying resource levels and scripts. Our annotation covers different levels of partial and full hallucinations as well as omissions both at the sentence and at the word level. Additionally, we revisit previous methods for hallucination and omission detection, show that conclusions made based on a single language pair largely do not hold for a large-scale evaluation, and establish new solid baselines.", }
Hallucinations in machine translation are translations that contain information completely unrelated to the input. Omissions are translations that do not include some of the input information. While both cases tend to be catastrophic errors undermining user trust, annotated data with these types of pathologies is extremely scarce and is limited to a few high-resource languages. In this work, we release an annotated dataset for the hallucination and omission phenomena covering 18 translation directions with varying resource levels and scripts. Our annotation covers different levels of partial and full hallucinations as well as omissions both at the sentence and at the word level. Additionally, we revisit previous methods for hallucination and omission detection, show that conclusions made based on a single language pair largely do not hold for a large-scale evaluation, and establish new solid baselines.
[ "Dale, David", "Voita, Elena", "Lam, Janice", "Hansanti, Prangthip", "Ropers, Christophe", "Kalbassi, Elahe", "Gao, Cynthia", "Barrault, Loic", "Costa-juss{\\`a}, Marta" ]
HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation
emnlp-main.42
2305.11746
[ "https://github.com/facebookresearch/stopes" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.43.bib
https://aclanthology.org/2023.emnlp-main.43/
@inproceedings{he-etal-2023-gradient, title = "Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation", author = "He, Dan and Pham, Minh-Quang and Ha, Thanh-Le and Turchi, Marco", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.43", doi = "10.18653/v1/2023.emnlp-main.43", pages = "654--670", abstract = "Multilingual neural machine translation (MNMT) offers the convenience of translating between multiple languages with a single model. However, MNMT often suffers from performance degradation in high-resource languages compared to bilingual counterparts. This degradation is commonly attributed to parameter interference, which occurs when parameters are fully shared across all language pairs. In this work, to tackle this issue we propose a gradient-based gradual pruning technique for MNMT. Our approach aims to identify an optimal sub-network for each language pair within the multilingual model by leveraging gradient-based information as pruning criterion and gradually increasing the pruning ratio as schedule. Our approach allows for partial parameter sharing across language pairs to alleviate interference, and each pair preserves its unique parameters to capture language-specific information. Comprehensive experiments on IWSLT and WMT datasets show that our approach yields a notable performance gain on both datasets.", }
Multilingual neural machine translation (MNMT) offers the convenience of translating between multiple languages with a single model. However, MNMT often suffers from performance degradation in high-resource languages compared to bilingual counterparts. This degradation is commonly attributed to parameter interference, which occurs when parameters are fully shared across all language pairs. In this work, to tackle this issue we propose a gradient-based gradual pruning technique for MNMT. Our approach aims to identify an optimal sub-network for each language pair within the multilingual model by leveraging gradient-based information as the pruning criterion and gradually increasing the pruning ratio on a schedule. Our approach allows for partial parameter sharing across language pairs to alleviate interference, and each pair preserves its unique parameters to capture language-specific information. Comprehensive experiments on IWSLT and WMT datasets show that our approach yields a notable performance gain on both datasets.
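The two ingredients, a gradient-magnitude criterion and a gradually increasing sparsity target, fit in a few lines. The cubic ramp below is the common Zhu-and-Gupta form, used here as an assumed stand-in for the paper's schedule; the mask would be computed separately per language pair.

```python
import torch

def target_sparsity(step, total_steps, final_sparsity):
    """Cubic ramp from 0 to final_sparsity over training (assumed schedule)."""
    t = min(step / max(total_steps, 1), 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def gradient_mask(param, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the largest
    (accumulated) gradient magnitude as one language pair's sub-network."""
    scores = param.grad.abs().flatten()
    k = int(sparsity * scores.numel())
    if k == 0:
        return torch.ones_like(param)
    threshold = torch.kthvalue(scores, k).values
    return (param.grad.abs() > threshold).float()
```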
[ "He, Dan", "Pham, Minh-Quang", "Ha, Thanh-Le", "Turchi, Marco" ]
Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation
emnlp-main.43
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.44.bib
https://aclanthology.org/2023.emnlp-main.44/
@inproceedings{whitehouse-etal-2023-llm, title = "{LLM}-powered Data Augmentation for Enhanced Cross-lingual Performance", author = "Whitehouse, Chenxi and Choudhury, Monojit and Aji, Alham Fikri", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.44", doi = "10.18653/v1/2023.emnlp-main.44", pages = "671--686", abstract = "This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we evaluate the effectiveness of fine-tuning smaller multilingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and target languages, as well as translated English-generated data, revealing the overall advantages of incorporating data generated by LLMs, e.g. a notable 13.4 accuracy score improvement for the best case. Furthermore, we conduct a human evaluation by asking native speakers to assess the naturalness and logical coherence of the generated examples across different languages. The results of the evaluation indicate that LLMs such as ChatGPT and GPT-4 excel at producing natural and coherent text in most languages, however, they struggle to generate meaningful text in certain languages like Tamil. We also observe that ChatGPT falls short in generating plausible alternatives compared to the original dataset, whereas examples from GPT-4 exhibit competitive logical consistency.", }
This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we evaluate the effectiveness of fine-tuning smaller multilingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and target languages, as well as translated English-generated data, revealing the overall advantages of incorporating data generated by LLMs, e.g. a notable 13.4 accuracy score improvement for the best case. Furthermore, we conduct a human evaluation by asking native speakers to assess the naturalness and logical coherence of the generated examples across different languages. The results of the evaluation indicate that LLMs such as ChatGPT and GPT-4 excel at producing natural and coherent text in most languages, however, they struggle to generate meaningful text in certain languages like Tamil. We also observe that ChatGPT falls short in generating plausible alternatives compared to the original dataset, whereas examples from GPT-4 exhibit competitive logical consistency.
[ "Whitehouse, Chenxi", "Choudhury, Monojit", "Aji, Alham Fikri" ]
LLM-powered Data Augmentation for Enhanced Cross-lingual Performance
emnlp-main.44
2305.14288
[ "https://github.com/mbzuai-nlp/gen-X" ]
https://huggingface.co./papers/2305.14288
2
0
0
3
[]
[ "coref-data/gen_winograd_raw" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.45.bib
https://aclanthology.org/2023.emnlp-main.45/
@inproceedings{wang-etal-2023-prompt-based, title = "Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition", author = "Wang, Chenxu and Jian, Ping and Huang, Mu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.45", doi = "10.18653/v1/2023.emnlp-main.45", pages = "687--699", abstract = "Implicit Discourse Relation Recognition (IDRR), which infers discourse relations without the help of explicit connectives, is still a crucial and challenging task for discourse parsing. Recent works tend to exploit the hierarchical structure information from the annotated senses, which demonstrate enhanced discourse relation representations can be obtained by integrating sense hierarchy. Nevertheless, the performance and robustness for IDRR are significantly constrained by the availability of annotated data. Fortunately, there is a wealth of unannotated utterances with explicit connectives, that can be utilized to acquire enriched discourse relation features. In light of such motivation, we propose a $\textbf{P}$rompt-based $\textbf{L}$ogical $\textbf{S}$emantics $\textbf{E}$nhancement (PLSE) method for IDRR. Essentially, our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction. Furthermore, considering the prompt-based connective prediction exhibits local dependencies due to the deficiency of masked language model (MLM) in capturing global semantics, we design a novel self-supervised learning objective based on mutual information maximization to derive enhanced representations of logical semantics for IDRR. Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.", }
Implicit Discourse Relation Recognition (IDRR), which infers discourse relations without the help of explicit connectives, is still a crucial and challenging task for discourse parsing. Recent works tend to exploit the hierarchical structure information from the annotated senses, demonstrating that enhanced discourse relation representations can be obtained by integrating sense hierarchy. Nevertheless, the performance and robustness for IDRR are significantly constrained by the availability of annotated data. Fortunately, there is a wealth of unannotated utterances with explicit connectives that can be utilized to acquire enriched discourse relation features. In light of such motivation, we propose a Prompt-based Logical Semantics Enhancement (PLSE) method for IDRR. Essentially, our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction. Furthermore, considering that prompt-based connective prediction exhibits local dependencies due to the deficiency of masked language models (MLMs) in capturing global semantics, we design a novel self-supervised learning objective based on mutual information maximization to derive enhanced representations of logical semantics for IDRR. Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
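For the mutual-information objective, an InfoNCE-style bound between a global sequence representation and its token-level representations is one standard realization of "mutual information maximization"; the estimator below is illustrative, and the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(global_repr, local_reprs, temperature=0.1):
    """global_repr: [B, D]; local_reprs: [B, L, D] from the same batch.

    Matched (global, local) pairs are positives; other items in the batch
    serve as negatives.
    """
    g = F.normalize(global_repr, dim=-1)
    l = F.normalize(local_reprs.mean(dim=1), dim=-1)   # pool token-level states
    logits = g @ l.t() / temperature                   # [B, B] similarity
    targets = torch.arange(g.size(0), device=g.device)
    return F.cross_entropy(logits, targets)
```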
[ "Wang, Chenxu", "Jian, Ping", "Huang, Mu" ]
Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition
emnlp-main.45
2311.00367
[ "https://github.com/lalalamdbf/plse_idrr" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.46.bib
https://aclanthology.org/2023.emnlp-main.46/
@inproceedings{chung-yu-2023-vlis, title = "{VLIS}: Unimodal Language Models Guide Multimodal Language Generation", author = "Chung, Jiwan and Yu, Youngjae", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.46", doi = "10.18653/v1/2023.emnlp-main.46", pages = "700--721", abstract = "Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VLIS), a novel framework that combines the visual conditioning capability of vision-language models with the language understanding of unimodal text-only language models without further training. It extracts pointwise mutual information of each image and text from a visual-language model and uses the value as an importance sampling weight to adjust the token likelihood from a text-only model. VLIS improves vision-language models on diverse tasks, including commonsense understanding (WHOOPS, OK-VQA, and ScienceQA) and complex text generation (Concadia, Image Paragraph Captioning, and ROCStories). Our results suggest that VLIS represents a promising new direction for multimodal language generation.", }
Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VLIS), a novel framework that combines the visual conditioning capability of vision-language models with the language understanding of unimodal text-only language models without further training. It extracts pointwise mutual information of each image and text from a visual-language model and uses the value as an importance sampling weight to adjust the token likelihood from a text-only model. VLIS improves vision-language models on diverse tasks, including commonsense understanding (WHOOPS, OK-VQA, and ScienceQA) and complex text generation (Concadia, Image Paragraph Captioning, and ROCStories). Our results suggest that VLIS represents a promising new direction for multimodal language generation.
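At decoding time the combination is just next-token log-probability arithmetic: the image-token PMI estimated from the VLM (with and without the image) reweights the text-only model's distribution. A sketch; the tensor names and the `weight` knob are assumptions.

```python
import torch

def vlis_next_token_scores(logp_text, logp_vlm_image, logp_vlm_no_image, weight=1.0):
    """All inputs are next-token log-probabilities of shape [vocab_size].

    pmi ~ log p_vlm(token | image, ctx) - log p_vlm(token | ctx) measures how
    much the image supports each token; it rescales the text-only LM.
    """
    pmi = logp_vlm_image - logp_vlm_no_image
    return torch.log_softmax(logp_text + weight * pmi, dim=-1)
```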
[ "Chung, Jiwan", "Yu, Youngjae" ]
VLIS: Unimodal Language Models Guide Multimodal Language Generation
emnlp-main.46
2310.09767
[ "https://github.com/jiwanchung/vlis" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.47.bib
https://aclanthology.org/2023.emnlp-main.47/
@inproceedings{suresh-etal-2023-conceptual, title = "Conceptual structure coheres in human cognition but not in large language models", author = "Suresh, Siddharth and Mukherjee, Kushin and Yu, Xizheng and Huang, Wei-Chun and Padua, Lisa and Rogers, Timothy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.47", doi = "10.18653/v1/2023.emnlp-main.47", pages = "722--738", abstract = "Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic tasks. In contemporary language models, however, it is possible to interrogate the latent structure of conceptual representations using methods nearly identical to those commonly used with human participants. The current work uses three common techniques borrowed from cognitive psychology to estimate and compare lexical-semantic structure in both humans and a well-known large language model, the DaVinci variant of GPT-3. In humans, we show that conceptual structure is robust to differences in culture, language, and method of estimation. Structures estimated from the LLM behavior, while individually fairly consistent with those estimated from human behavior, depend much more upon the particular task used to generate behavior responses{--}responses generated by the very same model in the three tasks yield estimates of conceptual structure that cohere less with one another than do human structure estimates. The results suggest one important way that knowledge inhering in contemporary LLMs can differ from human cognition.", }
Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic tasks. In contemporary language models, however, it is possible to interrogate the latent structure of conceptual representations using methods nearly identical to those commonly used with human participants. The current work uses three common techniques borrowed from cognitive psychology to estimate and compare lexical-semantic structure in both humans and a well-known large language model, the DaVinci variant of GPT-3. In humans, we show that conceptual structure is robust to differences in culture, language, and method of estimation. Structures estimated from the LLM behavior, while individually fairly consistent with those estimated from human behavior, depend much more upon the particular task used to generate behavior responses: responses generated by the very same model in the three tasks yield estimates of conceptual structure that cohere less with one another than do human structure estimates. The results suggest one important way that knowledge inhering in contemporary LLMs can differ from human cognition.
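The coherence comparison itself is a small computation once each task yields a pairwise distance matrix over the same concepts: correlate the upper triangles of two estimates. Spearman correlation is an assumed choice of statistic here.

```python
import numpy as np
from scipy.stats import spearmanr

def structure_coherence(dist_a, dist_b):
    """dist_a, dist_b: [N, N] pairwise concept-distance matrices estimated by
    two different tasks (or by humans vs. an LLM). Higher rho = more coherent."""
    iu = np.triu_indices_from(np.asarray(dist_a), k=1)
    return spearmanr(np.asarray(dist_a)[iu], np.asarray(dist_b)[iu]).correlation
```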
[ "Suresh, Siddharth", "Mukherjee, Kushin", "Yu, Xizheng", "Huang, Wei-Chun", "Padua, Lisa", "Rogers, Timothy" ]
Conceptual structure coheres in human cognition but not in large language models
emnlp-main.47
2304.02754
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.48.bib
https://aclanthology.org/2023.emnlp-main.48/
@inproceedings{feng-etal-2023-towards, title = "Towards {LLM}-driven Dialogue State Tracking", author = "Feng, Yujie and Lu, Zexin and Liu, Bo and Zhan, Liming and Wu, Xiao-Ming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.48", doi = "10.18653/v1/2023.emnlp-main.48", pages = "739--755", abstract = "Dialogue State Tracking (DST) is of paramount importance in ensuring accurate tracking of user goals and system actions within task-oriented dialogue systems. The emergence of large language models (LLMs) such as GPT3 and ChatGPT has sparked considerable interest in assessing their efficacy across diverse applications. In this study, we conduct an initial examination of ChatGPT{'}s capabilities in DST. Our evaluation uncovers the exceptional performance of ChatGPT in this task, offering valuable insights to researchers regarding its capabilities and providing useful directions for designing and enhancing dialogue systems. Despite its impressive performance, ChatGPT has significant limitations including its closed-source nature, request restrictions, raising data privacy concerns, and lacking local deployment capabilities. To address these concerns, we present LDST, an LLM-driven DST framework based on smaller, open-source foundation models. By utilizing a novel domain-slot instruction tuning method, LDST achieves performance on par with ChatGPT. Comprehensive evaluations across three distinct experimental settings, we find that LDST exhibits remarkable performance improvements in both zero-shot and few-shot setting compared to previous SOTA methods. The source code is provided for reproducibility.", }
Dialogue State Tracking (DST) is of paramount importance in ensuring accurate tracking of user goals and system actions within task-oriented dialogue systems. The emergence of large language models (LLMs) such as GPT-3 and ChatGPT has sparked considerable interest in assessing their efficacy across diverse applications. In this study, we conduct an initial examination of ChatGPT's capabilities in DST. Our evaluation uncovers the exceptional performance of ChatGPT in this task, offering valuable insights to researchers regarding its capabilities and providing useful directions for designing and enhancing dialogue systems. Despite its impressive performance, ChatGPT has significant limitations, including its closed-source nature, request restrictions, data privacy concerns, and a lack of local deployment capabilities. To address these concerns, we present LDST, an LLM-driven DST framework based on smaller, open-source foundation models. By utilizing a novel domain-slot instruction tuning method, LDST achieves performance on par with ChatGPT. Through comprehensive evaluations across three distinct experimental settings, we find that LDST exhibits remarkable performance improvements in both zero-shot and few-shot settings compared to previous SOTA methods. The source code is provided for reproducibility.
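Domain-slot instruction tuning boils down to rendering each (dialogue, domain, slot) triple as an instruction whose answer is the slot value. The template below is a hypothetical illustration of the format, not the released one.

```python
def dst_instruction(dialogue, domain, slot, candidates=None):
    """Render one dialogue-state query; the model is tuned to emit the value."""
    options = f" Choose one of: {', '.join(candidates)}." if candidates else ""
    return (
        "Track the dialogue state for the conversation below.\n"
        f"Dialogue: {dialogue}\n"
        f"What is the value of slot '{slot}' in domain '{domain}'?"
        f"{options}\nValue:"
    )

print(dst_instruction("User: book a cheap hotel downtown.", "hotel", "price range",
                      candidates=["cheap", "moderate", "expensive"]))
```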
[ "Feng, Yujie", "Lu, Zexin", "Liu, Bo", "Zhan, Liming", "Wu, Xiao-Ming" ]
Towards LLM-driven Dialogue State Tracking
emnlp-main.48
2310.14970
[ "https://github.com/woodscene/ldst" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.49.bib
https://aclanthology.org/2023.emnlp-main.49/
@inproceedings{zhang-etal-2023-learning-language, title = "Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis", author = "Zhang, Haoyu and Wang, Yu and Yin, Guanghao and Liu, Kejun and Liu, Yuanyuan and Yu, Tianshu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.49", doi = "10.18653/v1/2023.emnlp-main.49", pages = "756--767", abstract = "Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (*e.g.,* language, video, and audio), the potential sentiment-irrelevant and conflicting information across modalities may hinder the performance from being further improved. To alleviate this, we present Adaptive Language-guided Multimodal Transformer (ALMT), which incorporates an Adaptive Hyper-modality Learning (AHL) module to learn an irrelevance/conflict-suppressing representation from visual and audio features under the guidance of language features at different scales. With the obtained hyper-modality representation, the model can obtain a complementary and joint representation through multimodal fusion for effective MSA. In practice, ALMT achieves state-of-the-art performance on several popular datasets (*e.g.,* MOSI, MOSEI and CH-SIMS) and an abundance of ablation demonstrates the validity and necessity of our irrelevance/conflict suppression mechanism.", }
Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (e.g., language, video, and audio), the potential sentiment-irrelevant and conflicting information across modalities may hinder the performance from being further improved. To alleviate this, we present the Adaptive Language-guided Multimodal Transformer (ALMT), which incorporates an Adaptive Hyper-modality Learning (AHL) module to learn an irrelevance/conflict-suppressing representation from visual and audio features under the guidance of language features at different scales. With the obtained hyper-modality representation, the model can obtain a complementary and joint representation through multimodal fusion for effective MSA. In practice, ALMT achieves state-of-the-art performance on several popular datasets (e.g., MOSI, MOSEI, and CH-SIMS), and an abundance of ablation studies demonstrates the validity and necessity of our irrelevance/conflict suppression mechanism.
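The language-guided step can be pictured as cross-attention with text as queries over the audio/visual tokens, so only language-relevant content enters the hyper-modality representation. A single-layer toy version; ALMT applies this at multiple scales with additional machinery, and the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LanguageGuidedFusion(nn.Module):
    """Text queries attend over concatenated audio/visual tokens (toy sketch)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, audio, visual):
        # text: [B, Lt, D]; audio: [B, La, D]; visual: [B, Lv, D]
        av = torch.cat([audio, visual], dim=1)
        hyper, _ = self.attn(query=text, key=av, value=av)
        return text + hyper   # language-anchored joint representation
```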
[ "Zhang, Haoyu", "Wang, Yu", "Yin, Guanghao", "Liu, Kejun", "Liu, Yuanyuan", "Yu, Tianshu" ]
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis
emnlp-main.49
2310.05804
[ "https://github.com/Haoyu-ha/ALMT" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.50.bib
https://aclanthology.org/2023.emnlp-main.50/
@inproceedings{pantazopoulos-etal-2023-multitask, title = "Multitask Multimodal Prompted Training for Interactive Embodied Task Completion", author = "Pantazopoulos, Georgios and Nikandrou, Malvina and Parekh, Amit and Hemanthage, Bhathiya and Eshghi, Arash and Konstas, Ioannis and Rieser, Verena and Lemon, Oliver and Suglia, Alessandro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.50", doi = "10.18653/v1/2023.emnlp-main.50", pages = "768--789", abstract = "Interactive and embodied tasks pose at least two fundamental challenges to existing Vision {\&} Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Different to previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81{\%} success rate) on the Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided agents in the Alexa Arena.", }
Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Different from previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81% success rate) on the Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided agents in the Alexa Arena.
[ "Pantazopoulos, Georgios", "Nik", "rou, Malvina", "Parekh, Amit", "Hemanthage, Bhathiya", "Eshghi, Arash", "Konstas, Ioannis", "Rieser, Verena", "Lemon, Oliver", "Suglia, Aless", "ro" ]
Multitask Multimodal Prompted Training for Interactive Embodied Task Completion
emnlp-main.50
2311.04067
[ "" ]
https://huggingface.co./papers/2311.04067
1
1
0
9
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.51.bib
https://aclanthology.org/2023.emnlp-main.51/
@inproceedings{liu-etal-2023-afraid, title = "We{'}re Afraid Language Models Aren{'}t Modeling Ambiguity", author = "Liu, Alisa and Wu, Zhaofeng and Michael, Julian and Suhr, Alane and West, Peter and Koller, Alexander and Swayamdipta, Swabha and Smith, Noah and Choi, Yejin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.51", doi = "10.18653/v1/2023.emnlp-main.51", pages = "790--807", abstract = "Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models are increasingly employed as dialogue interfaces and writing aids, handling ambiguous language is critical to their success. We capture ambiguity in a sentence through its effect on entailment relations with another sentence, and collect AmbiEnt, a linguist-annotated benchmark of 1,645 examples with diverse kinds of ambiguity. We design a suite of tests based on AmbiEnt, presenting the first evaluation of pretrained LMs to recognize ambiguity and disentangle possible meanings. We find that the task remains extremely challenging, including for GPT-4, whose generated disambiguations are considered correct only 32{\%} of the time in crowdworker evaluation, compared to 90{\%} for disambiguations in our dataset. Finally, to illustrate the value of ambiguity-sensitive tools, we show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity. We encourage the field to rediscover the importance of ambiguity for NLP.", }
Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models are increasingly employed as dialogue interfaces and writing aids, handling ambiguous language is critical to their success. We capture ambiguity in a sentence through its effect on entailment relations with another sentence, and collect AmbiEnt, a linguist-annotated benchmark of 1,645 examples with diverse kinds of ambiguity. We design a suite of tests based on AmbiEnt, presenting the first evaluation of pretrained LMs to recognize ambiguity and disentangle possible meanings. We find that the task remains extremely challenging, including for GPT-4, whose generated disambiguations are considered correct only 32% of the time in crowdworker evaluation, compared to 90% for disambiguations in our dataset. Finally, to illustrate the value of ambiguity-sensitive tools, we show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity. We encourage the field to rediscover the importance of ambiguity for NLP.
[ "Liu, Alisa", "Wu, Zhaofeng", "Michael, Julian", "Suhr, Alane", "West, Peter", "Koller, Alex", "er", "Swayamdipta, Swabha", "Smith, Noah", "Choi, Yejin" ]
We're Afraid Language Models Aren't Modeling Ambiguity
emnlp-main.51
2304.14399
[ "https://github.com/alisawuffles/ambient" ]
https://huggingface.co./papers/2304.14399
1
0
0
9
[]
[ "metaeval/ambient" ]
[]
1
Poster