Datasets:

Schema of each paper record (field, dtype, observed range):

  bibtex_url                  string     length 41 to 53
  proceedings                 string     length 38 to 50
  bibtext                     string     length 535 to 2.8k
  abstract                    string     length 0 to 2.04k
  authors                     sequence   1 to 31 items
  title                       string     length 19 to 178
  id                          string     length 7 to 19
  type                        string     1 distinct value
  arxiv_id                    string     length 0 to 10
  GitHub                      sequence   exactly 1 item
  paper_page                  string     124 distinct values
  n_linked_authors            int64      -1 to 7
  upvotes                     int64      -1 to 79
  num_comments                int64      -1 to 4
  n_authors                   int64      -1 to 22
  paper_page_exists_pre_conf  int64      0 to 1
  Models                      sequence   0 to 55 items
  Datasets                    sequence   0 to 46 items
  Spaces                      sequence   0 to 82 items
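The schema above corresponds to a Hugging Face `datasets`-style table of LREC-COLING 2024 paper records. As a minimal sketch of loading and inspecting it (the hub repository id below is a placeholder, not this dataset's actual id):

```python
# Minimal sketch: load the records and inspect a few fields from the schema above.
from datasets import load_dataset

# "your-org/lrec-coling-2024-papers" is a hypothetical repo id, used only for illustration.
ds = load_dataset("your-org/lrec-coling-2024-papers", split="train")

print(ds.features)      # field names and dtypes, matching the schema table
print(len(ds))          # number of paper records

row = ds[0]
print(row["title"])     # e.g. the paper title
print(row["arxiv_id"])  # empty string when the paper has no arXiv preprint (length 0 in the schema)
print(row["upvotes"])   # appears to be -1 when the paper has no Hugging Face paper page
```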
https://aclanthology.org/2024.lrec-main.1.bib
https://aclanthology.org/2024.lrec-main.1/
@inproceedings{ma-etal-2024-3am, title = "3{AM}: An Ambiguity-Aware Multi-Modal Machine Translation Dataset", author = "Ma, Xinyu and Liu, Xuebo and Wong, Derek F. and Rao, Jun and Li, Bei and Ding, Liang and Chao, Lidia S. and Tao, Dacheng and Zhang, Min", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.1", pages = "1--13", abstract = "Multimodal machine translation (MMT) is a challenging task that seeks to improve translation quality by incorporating visual information. However, recent studies have indicated that the visual information provided by existing MMT datasets is insufficient, causing models to disregard it and overestimate their capabilities. This issue presents a significant obstacle to the development of MMT research. This paper presents a novel solution to this issue by introducing 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese, each with corresponding images. Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets. We utilize a word sense disambiguation model to select ambiguous data from vision-and-language datasets, resulting in a more challenging dataset. We further benchmark several state-of-the-art MMT models on our proposed dataset. Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets. Our work provides a valuable resource for researchers in the field of multimodal learning and encourages further exploration in this area. The data, code and scripts are freely available at https://github.com/MaxyLee/3AM.", }
Multimodal machine translation (MMT) is a challenging task that seeks to improve translation quality by incorporating visual information. However, recent studies have indicated that the visual information provided by existing MMT datasets is insufficient, causing models to disregard it and overestimate their capabilities. This issue presents a significant obstacle to the development of MMT research. This paper presents a novel solution to this issue by introducing 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese, each with corresponding images. Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets. We utilize a word sense disambiguation model to select ambiguous data from vision-and-language datasets, resulting in a more challenging dataset. We further benchmark several state-of-the-art MMT models on our proposed dataset. Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets. Our work provides a valuable resource for researchers in the field of multimodal learning and encourages further exploration in this area. The data, code and scripts are freely available at https://github.com/MaxyLee/3AM.
[ "Ma, Xinyu", "Liu, Xuebo", "Wong, Derek F.", "Rao, Jun", "Li, Bei", "Ding, Liang", "Chao, Lidia S.", "Tao, Dacheng", "Zhang, Min" ]
3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset
lrec-main.1
Poster
2404.18413
[ "https://github.com/maxylee/3am" ]
-1
-1
-1
-1
0
[]
[]
[]
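For readability, the first record above can be read against the schema as a labeled structure. The sketch below restores field names to record lrec-main.1; the mapping of the bare trailing numbers to n_linked_authors, upvotes, num_comments, n_authors and paper_page_exists_pre_conf is inferred from the field order in the schema, and the long text fields are truncated here:

```python
# Record lrec-main.1 with field names restored from the schema (long strings truncated).
record_1 = {
    "bibtex_url": "https://aclanthology.org/2024.lrec-main.1.bib",
    "proceedings": "https://aclanthology.org/2024.lrec-main.1/",
    "bibtext": "@inproceedings{ma-etal-2024-3am, ...}",
    "abstract": "Multimodal machine translation (MMT) is a challenging task ...",
    "authors": ["Ma, Xinyu", "Liu, Xuebo", "Wong, Derek F.", "Rao, Jun", "Li, Bei",
                "Ding, Liang", "Chao, Lidia S.", "Tao, Dacheng", "Zhang, Min"],
    "title": "3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset",
    "id": "lrec-main.1",
    "type": "Poster",
    "arxiv_id": "2404.18413",
    "GitHub": ["https://github.com/maxylee/3am"],
    "paper_page": "",               # not shown in the dump above; assumed empty
    "n_linked_authors": -1,
    "upvotes": -1,
    "num_comments": -1,
    "n_authors": -1,
    "paper_page_exists_pre_conf": 0,
    "Models": [],
    "Datasets": [],
    "Spaces": [],
}
```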
https://aclanthology.org/2024.lrec-main.2.bib
https://aclanthology.org/2024.lrec-main.2/
@inproceedings{bannour-etal-2024-benchmark, title = "A Benchmark Evaluation of Clinical Named Entity Recognition in {F}rench", author = "Bannour, Nesrine and Servan, Christophe and N{\'e}v{\'e}ol, Aur{\'e}lie and Tannier, Xavier", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.2", pages = "14--21", abstract = "Background: Transformer-based language models have shown strong performance on many Natural Language Processing (NLP) tasks. Masked Language Models (MLMs) attract sustained interest because they can be adapted to different languages and sub-domains through training or fine-tuning on specific corpora while remaining lighter than modern Large Language Models (MLMs). Recently, several MLMs have been released for the biomedical domain in French, and experiments suggest that they outperform standard French counterparts. However, no systematic evaluation comparing all models on the same corpora is available. Objective: This paper presents an evaluation of masked language models for biomedical French on the task of clinical named entity recognition. Material and methods: We evaluate biomedical models CamemBERT-bio and DrBERT and compare them to standard French models CamemBERT, FlauBERT and FrAlBERT as well as multilingual mBERT using three publically available corpora for clinical named entity recognition in French. The evaluation set-up relies on gold-standard corpora as released by the corpus developers. Results: Results suggest that CamemBERT-bio outperforms DrBERT consistently while FlauBERT offers competitive performance and FrAlBERT achieves the lowest carbon footprint. Conclusion: This is the first benchmark evaluation of biomedical masked language models for French clinical entity recognition that compares model performance consistently on nested entity recognition using metrics covering performance and environmental impact.", }
Background: Transformer-based language models have shown strong performance on many Natural Language Processing (NLP) tasks. Masked Language Models (MLMs) attract sustained interest because they can be adapted to different languages and sub-domains through training or fine-tuning on specific corpora while remaining lighter than modern Large Language Models (LLMs). Recently, several MLMs have been released for the biomedical domain in French, and experiments suggest that they outperform standard French counterparts. However, no systematic evaluation comparing all models on the same corpora is available. Objective: This paper presents an evaluation of masked language models for biomedical French on the task of clinical named entity recognition. Material and methods: We evaluate biomedical models CamemBERT-bio and DrBERT and compare them to standard French models CamemBERT, FlauBERT and FrAlBERT as well as multilingual mBERT using three publicly available corpora for clinical named entity recognition in French. The evaluation set-up relies on gold-standard corpora as released by the corpus developers. Results: Results suggest that CamemBERT-bio outperforms DrBERT consistently while FlauBERT offers competitive performance and FrAlBERT achieves the lowest carbon footprint. Conclusion: This is the first benchmark evaluation of biomedical masked language models for French clinical entity recognition that compares model performance consistently on nested entity recognition using metrics covering performance and environmental impact.
[ "Bannour, Nesrine", "Servan, Christophe", "N{\\'e}v{\\'e}ol, Aur{\\'e}lie", "Tannier, Xavier" ]
A Benchmark Evaluation of Clinical Named Entity Recognition in French
lrec-main.2
Poster
2403.19726
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.3.bib
https://aclanthology.org/2024.lrec-main.3/
@inproceedings{nevens-etal-2024-benchmark, title = "A Benchmark for Recipe Understanding in Artificial Agents", author = "Nevens, Jens and de Haes, Robin and Ringe, Rachel and Pomarlan, Mihai and Porzel, Robert and Beuls, Katrien and van Eecke, Paul", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.3", pages = "22--42", abstract = "This paper introduces a novel benchmark that has been designed as a test bed for evaluating whether artificial agents are able to understand how to perform everyday activities, with a focus on the cooking domain. Understanding how to cook recipes is a highly challenging endeavour due to the underspecified and grounded nature of recipe texts, combined with the fact that recipe execution is a knowledge-intensive and precise activity. The benchmark comprises a corpus of recipes, a procedural semantic representation language of cooking actions, qualitative and quantitative kitchen simulators, and a standardised evaluation procedure. Concretely, the benchmark task consists in mapping a recipe formulated in natural language to a set of cooking actions that is precise enough to be executed in the simulated kitchen and yields the desired dish. To overcome the challenges inherent to recipe execution, this mapping process needs to incorporate reasoning over the recipe text, the state of the simulated kitchen environment, common-sense knowledge, knowledge of the cooking domain, and the action space of a virtual or robotic chef. This benchmark thereby addresses the growing interest in human-centric systems that combine natural language processing and situated reasoning to perform everyday activities.", }
This paper introduces a novel benchmark that has been designed as a test bed for evaluating whether artificial agents are able to understand how to perform everyday activities, with a focus on the cooking domain. Understanding how to cook recipes is a highly challenging endeavour due to the underspecified and grounded nature of recipe texts, combined with the fact that recipe execution is a knowledge-intensive and precise activity. The benchmark comprises a corpus of recipes, a procedural semantic representation language of cooking actions, qualitative and quantitative kitchen simulators, and a standardised evaluation procedure. Concretely, the benchmark task consists in mapping a recipe formulated in natural language to a set of cooking actions that is precise enough to be executed in the simulated kitchen and yields the desired dish. To overcome the challenges inherent to recipe execution, this mapping process needs to incorporate reasoning over the recipe text, the state of the simulated kitchen environment, common-sense knowledge, knowledge of the cooking domain, and the action space of a virtual or robotic chef. This benchmark thereby addresses the growing interest in human-centric systems that combine natural language processing and situated reasoning to perform everyday activities.
[ "Nevens, Jens", "de Haes, Robin", "Ringe, Rachel", "Pomarlan, Mihai", "Porzel, Robert", "Beuls, Katrien", "van Eecke, Paul" ]
A Benchmark for Recipe Understanding in Artificial Agents
lrec-main.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.4.bib
https://aclanthology.org/2024.lrec-main.4/
@inproceedings{kim-etal-2024-able, title = "{ABLE}: Agency-{B}e{L}iefs Embedding to Address Stereotypical Bias through Awareness Instead of Obliviousness", author = "Kim, Michelle YoungJin and Kim, Junghwan and Johnson, Kristen", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.4", pages = "43--56", abstract = "Natural Language Processing (NLP) models tend to inherit and amplify stereotypical biases present in their training data, leading to harmful societal consequences. Current efforts to rectify these biases typically revolve around making models oblivious to bias, which is at odds with the idea that humans require increased awareness to tackle these biases better. This prompts a fundamental research question: are bias-oblivious models the only viable solution to combat stereotypical biases? This paper answers this question by proposing the Agency-BeLiefs Embedding (ABLE) model, a novel approach that actively encodes stereotypical biases into the embedding space. ABLE draws upon social psychological theory to acquire and represent stereotypical biases in the form of agency and belief scores rather than directly representing stereotyped groups. Our experimental results showcase ABLE{'}s effectiveness in learning agency and belief stereotypes while preserving the language model{'}s proficiency. Furthermore, we underscore the practical significance of incorporating stereotypes within the ABLE model by demonstrating its utility in various downstream tasks. Our approach exemplifies the potential benefits of addressing bias through awareness, as opposed to the prevailing approach of mitigating bias through obliviousness.", }
Natural Language Processing (NLP) models tend to inherit and amplify stereotypical biases present in their training data, leading to harmful societal consequences. Current efforts to rectify these biases typically revolve around making models oblivious to bias, which is at odds with the idea that humans require increased awareness to tackle these biases better. This prompts a fundamental research question: are bias-oblivious models the only viable solution to combat stereotypical biases? This paper answers this question by proposing the Agency-BeLiefs Embedding (ABLE) model, a novel approach that actively encodes stereotypical biases into the embedding space. ABLE draws upon social psychological theory to acquire and represent stereotypical biases in the form of agency and belief scores rather than directly representing stereotyped groups. Our experimental results showcase ABLE's effectiveness in learning agency and belief stereotypes while preserving the language model's proficiency. Furthermore, we underscore the practical significance of incorporating stereotypes within the ABLE model by demonstrating its utility in various downstream tasks. Our approach exemplifies the potential benefits of addressing bias through awareness, as opposed to the prevailing approach of mitigating bias through obliviousness.
[ "Kim, Michelle YoungJin", "Kim, Junghwan", "Johnson, Kristen" ]
ABLE: Agency-BeLiefs Embedding to Address Stereotypical Bias through Awareness Instead of Obliviousness
lrec-main.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.5.bib
https://aclanthology.org/2024.lrec-main.5/
@inproceedings{takahashi-etal-2024-abstractive, title = "Abstractive Multi-Video Captioning: Benchmark Dataset Construction and Extensive Evaluation", author = "Takahashi, Rikito and Kiyomaru, Hirokazu and Chu, Chenhui and Kurohashi, Sadao", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.5", pages = "57--69", abstract = "This paper introduces a new task, abstractive multi-video captioning, which focuses on abstracting multiple videos with natural language. Unlike conventional video captioning tasks generating a specific caption for a video, our task generates an abstract caption of the shared content in a video group containing multiple videos. To address our task, models must learn to understand each video in detail and have strong abstraction abilities to find commonalities among videos. We construct a benchmark dataset for abstractive multi-video captioning named AbstrActs. AbstrActs contains 13.5k video groups and corresponding abstract captions. As abstractive multi-video captioning models, we explore two approaches: end-to-end and cascade. For evaluation, we proposed a new metric, CocoA, which can evaluate the model performance based on the abstractness of the generated captions. In experiments, we report the impact of the way of combining multiple video features, the overall model architecture, and the number of input videos.", }
This paper introduces a new task, abstractive multi-video captioning, which focuses on abstracting multiple videos with natural language. Unlike conventional video captioning tasks generating a specific caption for a video, our task generates an abstract caption of the shared content in a video group containing multiple videos. To address our task, models must learn to understand each video in detail and have strong abstraction abilities to find commonalities among videos. We construct a benchmark dataset for abstractive multi-video captioning named AbstrActs. AbstrActs contains 13.5k video groups and corresponding abstract captions. As abstractive multi-video captioning models, we explore two approaches: end-to-end and cascade. For evaluation, we proposed a new metric, CocoA, which can evaluate the model performance based on the abstractness of the generated captions. In experiments, we report the impact of the way of combining multiple video features, the overall model architecture, and the number of input videos.
[ "Takahashi, Rikito", "Kiyomaru, Hirokazu", "Chu, Chenhui", "Kurohashi, Sadao" ]
Abstractive Multi-Video Captioning: Benchmark Dataset Construction and Extensive Evaluation
lrec-main.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.6.bib
https://aclanthology.org/2024.lrec-main.6/
@inproceedings{wu-etal-2024-abstract, title = "Abstract-level Deductive Reasoning for Pre-trained Language Models", author = "Wu, Xin and Cai, Yi and Leung, Ho-fung", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.6", pages = "70--76", abstract = "Pre-trained Language Models have been shown to be able to emulate deductive reasoning in natural language. However, PLMs are easily affected by irrelevant information (e.g., entity) in instance-level proofs when learning deductive reasoning. To address this limitation, we propose an Abstract-level Deductive Reasoner (ADR). ADR is trained to predict the abstract reasoning proof of each sample, which guides PLMs to learn general reasoning patterns rather than instance-level knowledge. Experimental results demonstrate that ADR significantly reduces the impact of PLMs learning instance-level knowledge (over 70{\%}).", }
Pre-trained Language Models (PLMs) have been shown to be able to emulate deductive reasoning in natural language. However, PLMs are easily affected by irrelevant information (e.g., entity) in instance-level proofs when learning deductive reasoning. To address this limitation, we propose an Abstract-level Deductive Reasoner (ADR). ADR is trained to predict the abstract reasoning proof of each sample, which guides PLMs to learn general reasoning patterns rather than instance-level knowledge. Experimental results demonstrate that ADR significantly reduces the impact of PLMs learning instance-level knowledge (over 70%).
[ "Wu, Xin", "Cai, Yi", "Leung, Ho-fung" ]
Abstract-level Deductive Reasoning for Pre-trained Language Models
lrec-main.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.7.bib
https://aclanthology.org/2024.lrec-main.7/
@inproceedings{kasai-etal-2024-call, title = "A Call for Clarity in Beam Search: How It Works and When It Stops", author = "Kasai, Jungo and Sakaguchi, Keisuke and Le Bras, Ronan and Radev, Dragomir and Choi, Yejin and Smith, Noah A.", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.7", pages = "77--90", abstract = "Text generation with beam search has proven successful in a wide range of applications. We point out that, though largely overlooked in the literature, the commonly-used implementation of beam decoding (e.g., Hugging Face Transformers and fairseq) uses a first come, first served heuristic: it keeps a set of already completed sequences over time steps and stops when the size of this set reaches the beam size. Based on this finding, we introduce a patience factor, a simple modification to this beam decoding implementation, that generalizes the stopping criterion and provides flexibility to the depth of search. Empirical results demonstrate that adjusting this patience factor improves decoding performance of strong pretrained models on news text summarization and machine translation over diverse language pairs, with a negligible inference slowdown. Our approach only modifies one line of code and can be thus readily incorporated in any implementation. Further, we find that different versions of beam decoding result in large performance differences in summarization, demonstrating the need for clarity in specifying the beam search implementation in research work. Our code will be available upon publication.", }
Text generation with beam search has proven successful in a wide range of applications. We point out that, though largely overlooked in the literature, the commonly-used implementation of beam decoding (e.g., Hugging Face Transformers and fairseq) uses a first come, first served heuristic: it keeps a set of already completed sequences over time steps and stops when the size of this set reaches the beam size. Based on this finding, we introduce a patience factor, a simple modification to this beam decoding implementation, that generalizes the stopping criterion and provides flexibility to the depth of search. Empirical results demonstrate that adjusting this patience factor improves decoding performance of strong pretrained models on news text summarization and machine translation over diverse language pairs, with a negligible inference slowdown. Our approach only modifies one line of code and can be thus readily incorporated in any implementation. Further, we find that different versions of beam decoding result in large performance differences in summarization, demonstrating the need for clarity in specifying the beam search implementation in research work. Our code will be available upon publication.
[ "Kasai, Jungo", "Sakaguchi, Keisuke", "Le Bras, Ronan", "Radev, Dragomir", "Choi, Yejin", "Smith, Noah A." ]
A Call for Clarity in Beam Search: How It Works and When It Stops
lrec-main.7
Poster
2204.05424
[ "https://github.com/jungokasai/beam_with_patience" ]
https://huggingface.co./papers/2204.05424
0
0
0
6
1
[]
[]
[ "ashhadahsan/whisperX", "katospiegel/amanu" ]
https://aclanthology.org/2024.lrec-main.8.bib
https://aclanthology.org/2024.lrec-main.8/
@inproceedings{odijk-kroon-2024-canonical, title = "A Canonical Form for Flexible Multiword Expressions", author = "Odijk, Jan and Kroon, Martin", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.8", pages = "91--101", abstract = "This paper proposes a canonical form for Multiword Expressions (MWEs), in particular for the Dutch language. The canonical form can be enriched with all kinds of annotations that can be used to describe the properties of the MWE and its components. It also introduces the DUCAME (DUtch CAnonical Multiword Expressions) lexical resource with more than 11k MWEs in canonical form. DUCAME is used in MWE-Finder to automatically generate queries for searching for flexible MWEs in large text corpora.", }
This paper proposes a canonical form for Multiword Expressions (MWEs), in particular for the Dutch language. The canonical form can be enriched with all kinds of annotations that can be used to describe the properties of the MWE and its components. It also introduces the DUCAME (DUtch CAnonical Multiword Expressions) lexical resource with more than 11k MWEs in canonical form. DUCAME is used in MWE-Finder to automatically generate queries for searching for flexible MWEs in large text corpora.
[ "Odijk, Jan", "Kroon, Martin" ]
A Canonical Form for Flexible Multiword Expressions
lrec-main.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.9.bib
https://aclanthology.org/2024.lrec-main.9/
@inproceedings{yu-etal-2024-cause, title = "A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation", author = "Yu, Jifan and Zhang, Xiaohan and Xu, Yifan and Lei, Xuanyu and Yao, Zijun and Zhang, Jing and Hou, Lei and Li, Juanzi", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.9", pages = "102--112", abstract = "Empowered by the large-scale pretrained language models, existing dialogue systems have demonstrated impressive performance conducting fluent and natural-sounding conversations. However, they are still plagued by the {\textless}b{\textgreater}hallucination{\textless}/b{\textgreater} problem, causing unpredictable factual errors in the generated responses. Recently, knowledge-grounded dialogue generation models, that intentionally invoke external knowledge resources to more informative responses, are also proven to be effective in reducing hallucination. Following the idea of getting high-quality knowledge, a few efforts have achieved pretty good performance on this issue. As some inevitable knowledge noises may also lead to hallucinations, it is emergent to investigate the reason and future directions for building noise-tolerant methods in KGD tasks. In this paper, we analyze the causal story behind this problem with counterfactual reasoning methods. Based on the causal effect analysis, we propose a possible solution for alleviating the hallucination in KGD by exploiting the dialogue-knowledge interaction. Experimental results of our example implementation show that this method can reduce hallucination without disrupting other dialogue performance, while keeping adaptive to different generation models. We hope our efforts can support and call for more attention to developing lightweight techniques towards robust and trusty dialogue systems.", }
Empowered by the large-scale pretrained language models, existing dialogue systems have demonstrated impressive performance conducting fluent and natural-sounding conversations. However, they are still plagued by the hallucination problem, causing unpredictable factual errors in the generated responses. Recently, knowledge-grounded dialogue generation models, that intentionally invoke external knowledge resources to more informative responses, are also proven to be effective in reducing hallucination. Following the idea of getting high-quality knowledge, a few efforts have achieved pretty good performance on this issue. As some inevitable knowledge noises may also lead to hallucinations, it is emergent to investigate the reason and future directions for building noise-tolerant methods in KGD tasks. In this paper, we analyze the causal story behind this problem with counterfactual reasoning methods. Based on the causal effect analysis, we propose a possible solution for alleviating the hallucination in KGD by exploiting the dialogue-knowledge interaction. Experimental results of our example implementation show that this method can reduce hallucination without disrupting other dialogue performance, while keeping adaptive to different generation models. We hope our efforts can support and call for more attention to developing lightweight techniques towards robust and trusty dialogue systems.
[ "Yu, Jifan", "Zhang, Xiaohan", "Xu, Yifan", "Lei, Xuanyu", "Yao, Zijun", "Zhang, Jing", "Hou, Lei", "Li, Juanzi" ]
A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation
lrec-main.9
Poster
2404.03491
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.10.bib
https://aclanthology.org/2024.lrec-main.10/
@inproceedings{foley-etal-2024-access, title = "Access Control Framework for Language Collections", author = "Foley, Ben and Sefton, Peter and Musgrave, Simon and Sacal Bonequi, Moises", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.10", pages = "113--121", abstract = "This paper introduces the licence-based access control framework developed by the Language Data Commons of Australia (LDaCA) for a range of language collections, with examples given of implementation for significant Indigenous and Australian English collections. Language collections may be curated for many reasons, such as documentation for language revival, for research, security or commercial purposes. Some language collections are created with the intention of being {``}Open Access{''}; publicly available with no restriction. Other collections require that access be limited to individuals or groups of people, either at the collection level or at the level of individual items, such as a recording. To facilitate access, while respecting the intended access conditions for a collection, or collection items, some form of user identification and authorisation process is typically required. The access control framework described in this paper is based upon descriptions of access conditions in easy-to-read licences which are stored alongside data files in the collections; and is implemented using identity-based authentication and authorisation systems where required. The framework accommodates accessibility needs from unrestricted to extremely limited access, is dynamic, and able to be modified in response to changes in access needs. Storing licences with the data is a significant development in separating language data and access requirements from access infrastructure.", }
This paper introduces the licence-based access control framework developed by the Language Data Commons of Australia (LDaCA) for a range of language collections, with examples given of implementation for significant Indigenous and Australian English collections. Language collections may be curated for many reasons, such as documentation for language revival, for research, security or commercial purposes. Some language collections are created with the intention of being "Open Access"; publicly available with no restriction. Other collections require that access be limited to individuals or groups of people, either at the collection level or at the level of individual items, such as a recording. To facilitate access, while respecting the intended access conditions for a collection, or collection items, some form of user identification and authorisation process is typically required. The access control framework described in this paper is based upon descriptions of access conditions in easy-to-read licences which are stored alongside data files in the collections; and is implemented using identity-based authentication and authorisation systems where required. The framework accommodates accessibility needs from unrestricted to extremely limited access, is dynamic, and able to be modified in response to changes in access needs. Storing licences with the data is a significant development in separating language data and access requirements from access infrastructure.
[ "Foley, Ben", "Sefton, Peter", "Musgrave, Simon", "Sacal Bonequi, Moises" ]
Access Control Framework for Language Collections
lrec-main.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.11.bib
https://aclanthology.org/2024.lrec-main.11/
@inproceedings{niu-etal-2024-challenge, title = "A Challenge Dataset and Effective Models for Conversational Stance Detection", author = "Niu, Fuqiang and Yang, Min and Li, Ang and Zhang, Baoquan and Peng, Xiaojiang and Zhang, Bowen", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.11", pages = "122--132", abstract = "Previous stance detection studies typically concentrate on evaluating stances within individual instances, thereby exhibiting limitations in effectively modeling multi-party discussions concerning the same specific topic, as naturally transpire in authentic social media interactions. This constraint arises primarily due to the scarcity of datasets that authentically replicate real social media contexts, hindering the research progress of conversational stance detection. In this paper, we introduce a new multi-turn conversation stance detection dataset (called \textbf{MT-CSD}), which encompasses multiple targets for conversational stance detection. To derive stances from this challenging dataset, we propose a global-local attention network (\textbf{GLAN}) to address both long and short-range dependencies inherent in conversational data. Notably, even state-of-the-art stance detection methods, exemplified by GLAN, exhibit an accuracy of only 50.47{\%}, highlighting the persistent challenges in conversational stance detection. Furthermore, our MT-CSD dataset serves as a valuable resource to catalyze advancements in cross-domain stance detection, where a classifier is adapted from a different yet related target. We believe that MT-CSD will contribute to advancing real-world applications of stance detection research. Our source code, data, and models are available at \url{https://github.com/nfq729/MT-CSD}.", }
Previous stance detection studies typically concentrate on evaluating stances within individual instances, thereby exhibiting limitations in effectively modeling multi-party discussions concerning the same specific topic, as naturally transpire in authentic social media interactions. This constraint arises primarily due to the scarcity of datasets that authentically replicate real social media contexts, hindering the research progress of conversational stance detection. In this paper, we introduce a new multi-turn conversation stance detection dataset (called MT-CSD), which encompasses multiple targets for conversational stance detection. To derive stances from this challenging dataset, we propose a global-local attention network (GLAN) to address both long and short-range dependencies inherent in conversational data. Notably, even state-of-the-art stance detection methods, exemplified by GLAN, exhibit an accuracy of only 50.47%, highlighting the persistent challenges in conversational stance detection. Furthermore, our MT-CSD dataset serves as a valuable resource to catalyze advancements in cross-domain stance detection, where a classifier is adapted from a different yet related target. We believe that MT-CSD will contribute to advancing real-world applications of stance detection research. Our source code, data, and models are available at https://github.com/nfq729/MT-CSD.
[ "Niu, Fuqiang", "Yang, Min", "Li, Ang", "Zhang, Baoquan", "Peng, Xiaojiang", "Zhang, Bowen" ]
A Challenge Dataset and Effective Models for Conversational Stance Detection
lrec-main.11
Poster
2403.11145
[ "https://github.com/nfq729/mt-csd" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.12.bib
https://aclanthology.org/2024.lrec-main.12/
@inproceedings{laskina-etal-2024-closer, title = "A Closer Look at Clustering Bilingual Comparable Corpora", author = "Laskina, Anna and Gaussier, Eric and Calvary, Gaelle", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.12", pages = "133--142", abstract = "We study in this paper the problem of clustering comparable corpora, building upon the observation that different types of clusters can be present in such corpora: monolingual clusters comprising documents in a single language, and bilingual or multilingual clusters comprising documents written in different languages. Based on a state-of-the-art deep variant of Kmeans, we propose new clustering models fully adapted to comparable corpora and illustrate their behavior on several bilingual collections (in English, French, German and Russian) created from Wikipedia.", }
We study in this paper the problem of clustering comparable corpora, building upon the observation that different types of clusters can be present in such corpora: monolingual clusters comprising documents in a single language, and bilingual or multilingual clusters comprising documents written in different languages. Based on a state-of-the-art deep variant of Kmeans, we propose new clustering models fully adapted to comparable corpora and illustrate their behavior on several bilingual collections (in English, French, German and Russian) created from Wikipedia.
[ "Laskina, Anna", "Gaussier, Eric", "Calvary, Gaelle" ]
A Closer Look at Clustering Bilingual Comparable Corpora
lrec-main.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.13.bib
https://aclanthology.org/2024.lrec-main.13/
@inproceedings{lee-parde-2024-acnempathize, title = "{A}cn{E}mpathize: A Dataset for Understanding Empathy in Dermatology Conversations", author = "Lee, Gyeongeun and Parde, Natalie", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.13", pages = "143--153", abstract = "Empathy is critical for effective communication and mental health support, and in many online health communities people anonymously engage in conversations to seek and provide empathetic support. The ability to automatically recognize and detect empathy contributes to the understanding of human emotions expressed in text, therefore advancing natural language understanding across various domains. Existing empathy and mental health-related corpora focus on broader contexts and lack domain specificity, but similarly to other tasks (e.g., learning distinct patterns associated with COVID-19 versus skin allergies in clinical notes), observing empathy within different domains is crucial to providing tailored support. To address this need, we introduce AcnEmpathize, a dataset that captures empathy expressed in acne-related discussions from forum posts focused on its emotional and psychological effects. We find that transformer-based models trained on our dataset demonstrate excellent performance at empathy classification. Our dataset is publicly released to facilitate analysis of domain-specific empathy in online conversations and advance research in this challenging and intriguing domain.", }
Empathy is critical for effective communication and mental health support, and in many online health communities people anonymously engage in conversations to seek and provide empathetic support. The ability to automatically recognize and detect empathy contributes to the understanding of human emotions expressed in text, therefore advancing natural language understanding across various domains. Existing empathy and mental health-related corpora focus on broader contexts and lack domain specificity, but similarly to other tasks (e.g., learning distinct patterns associated with COVID-19 versus skin allergies in clinical notes), observing empathy within different domains is crucial to providing tailored support. To address this need, we introduce AcnEmpathize, a dataset that captures empathy expressed in acne-related discussions from forum posts focused on its emotional and psychological effects. We find that transformer-based models trained on our dataset demonstrate excellent performance at empathy classification. Our dataset is publicly released to facilitate analysis of domain-specific empathy in online conversations and advance research in this challenging and intriguing domain.
[ "Lee, Gyeongeun", "Parde, Natalie" ]
AcnEmpathize: A Dataset for Understanding Empathy in Dermatology Conversations
lrec-main.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.14.bib
https://aclanthology.org/2024.lrec-main.14/
@inproceedings{ward-marco-2024-collection, title = "A Collection of Pragmatic-Similarity Judgments over Spoken Dialog Utterances", author = "Ward, Nigel and Marco, Divette", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.14", pages = "154--163", abstract = "Automatic measures of similarity between sentences or utterances are invaluable for training speech synthesizers, evaluating machine translation, and assessing learner productions. While there exist measures for semantic similarity and prosodic similarity, there are as yet none for pragmatic similarity. To enable the training of such measures, we developed the first collection of human judgments of pragmatic similarity between utterance pairs. 9 judges listened to 220 utterance pairs, each consisting of an utterance extracted from a recorded dialog and a re-enactment of that utterance under various conditions designed to create various degrees of similarity. Each pair was rated on a continuous scale. The average inter-judge correlation was 0.45. We make this data available at https://github.com/divettemarco/PragSim .", }
Automatic measures of similarity between sentences or utterances are invaluable for training speech synthesizers, evaluating machine translation, and assessing learner productions. While there exist measures for semantic similarity and prosodic similarity, there are as yet none for pragmatic similarity. To enable the training of such measures, we developed the first collection of human judgments of pragmatic similarity between utterance pairs. 9 judges listened to 220 utterance pairs, each consisting of an utterance extracted from a recorded dialog and a re-enactment of that utterance under various conditions designed to create various degrees of similarity. Each pair was rated on a continuous scale. The average inter-judge correlation was 0.45. We make this data available at https://github.com/divettemarco/PragSim .
[ "Ward, Nigel", "Marco, Divette" ]
A Collection of Pragmatic-Similarity Judgments over Spoken Dialog Utterances
lrec-main.14
Poster
2403.14808
[ "https://github.com/divettemarco/pragsim" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.15.bib
https://aclanthology.org/2024.lrec-main.15/
@inproceedings{fernandes-etal-2024-community, title = "A Community-Driven Data-to-Text Platform for Football Match Summaries", author = "Fernandes, Pedro and Nunes, S{\'e}rgio and Santos, Lu{\'\i}s", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.15", pages = "164--173", abstract = "Data-to-text systems offer a transformative approach to generating textual content in data-rich environments. This paper describes the architecture and deployment of Prosebot, a community-driven data-to-text platform tailored for generating textual summaries of football matches derived from match statistics. The system enhances the visibility of lower-tier matches, traditionally accessible only through data tables. Prosebot uses a template-based Natural Language Generation (NLG) module to generate initial drafts, which are subsequently refined by the reading community. Comprehensive evaluations, encompassing both human-mediated and automated assessments, were conducted to assess the system{'}s efficacy. Analysis of the community-edited texts reveals that significant segments of the initial automated drafts are retained, suggesting their high quality and acceptance by the collaborators. Preliminary surveys conducted among platform users highlight a predominantly positive reception within the community.", }
Data-to-text systems offer a transformative approach to generating textual content in data-rich environments. This paper describes the architecture and deployment of Prosebot, a community-driven data-to-text platform tailored for generating textual summaries of football matches derived from match statistics. The system enhances the visibility of lower-tier matches, traditionally accessible only through data tables. Prosebot uses a template-based Natural Language Generation (NLG) module to generate initial drafts, which are subsequently refined by the reading community. Comprehensive evaluations, encompassing both human-mediated and automated assessments, were conducted to assess the system's efficacy. Analysis of the community-edited texts reveals that significant segments of the initial automated drafts are retained, suggesting their high quality and acceptance by the collaborators. Preliminary surveys conducted among platform users highlight a predominantly positive reception within the community.
[ "Fern", "es, Pedro", "Nunes, S{\\'e}rgio", "Santos, Lu{\\'\\i}s" ]
A Community-Driven Data-to-Text Platform for Football Match Summaries
lrec-main.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.16.bib
https://aclanthology.org/2024.lrec-main.16/
@inproceedings{meisenbacher-etal-2024-comparative, title = "A Comparative Analysis of Word-Level Metric Differential Privacy: Benchmarking the Privacy-Utility Trade-off", author = "Meisenbacher, Stephen and Nandakumar, Nihildev and Klymenko, Alexandra and Matthes, Florian", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.16", pages = "174--185", abstract = "The application of Differential Privacy to Natural Language Processing techniques has emerged in relevance in recent years, with an increasing number of studies published in established NLP outlets. In particular, the adaptation of Differential Privacy for use in NLP tasks has first focused on the *word-level*, where calibrated noise is added to word embedding vectors to achieve {``}noisy{''} representations. To this end, several implementations have appeared in the literature, each presenting an alternative method of achieving word-level Differential Privacy. Although each of these includes its own evaluation, no comparative analysis has been performed to investigate the performance of such methods relative to each other. In this work, we conduct such an analysis, comparing seven different algorithms on two NLP tasks with varying hyperparameters, including the *epsilon* parameter, or privacy budget. In addition, we provide an in-depth analysis of the results with a focus on the privacy-utility trade-off, as well as open-source our implementation code for further reproduction. As a result of our analysis, we give insight into the benefits and challenges of word-level Differential Privacy, and accordingly, we suggest concrete steps forward for the research field.", }
The application of Differential Privacy to Natural Language Processing techniques has emerged in relevance in recent years, with an increasing number of studies published in established NLP outlets. In particular, the adaptation of Differential Privacy for use in NLP tasks has first focused on the *word-level*, where calibrated noise is added to word embedding vectors to achieve "noisy" representations. To this end, several implementations have appeared in the literature, each presenting an alternative method of achieving word-level Differential Privacy. Although each of these includes its own evaluation, no comparative analysis has been performed to investigate the performance of such methods relative to each other. In this work, we conduct such an analysis, comparing seven different algorithms on two NLP tasks with varying hyperparameters, including the *epsilon* parameter, or privacy budget. In addition, we provide an in-depth analysis of the results with a focus on the privacy-utility trade-off, as well as open-source our implementation code for further reproduction. As a result of our analysis, we give insight into the benefits and challenges of word-level Differential Privacy, and accordingly, we suggest concrete steps forward for the research field.
[ "Meisenbacher, Stephen", "N", "akumar, Nihildev", "Klymenko, Alex", "ra", "Matthes, Florian" ]
A Comparative Analysis of Word-Level Metric Differential Privacy: Benchmarking the Privacy-Utility Trade-off
lrec-main.16
Poster
2404.03324
[ "https://github.com/sjmeis/mldp" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.17.bib
https://aclanthology.org/2024.lrec-main.17/
@inproceedings{zhao-etal-2024-comparative, title = "A Comparative Study of Explicit and Implicit Gender Biases in Large Language Models via Self-evaluation", author = "Zhao, Yachao and Wang, Bo and Wang, Yan and Zhao, Dongming and Jin, Xiaojia and Zhang, Jijun and He, Ruifang and Hou, Yuexian", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.17", pages = "186--198", abstract = "While extensive work has examined the explicit and implicit biases in large language models (LLMs), little research explores the relation between these two types of biases. This paper presents a comparative study of the explicit and implicit biases in LLMs grounded in social psychology. Social psychology distinguishes between explicit and implicit biases by whether the bias can be self-recognized by individuals. Aligning with this conceptualization, we propose a self-evaluation-based two-stage measurement of explicit and implicit biases within LLMs. First, the LLM is prompted to automatically fill templates with social targets to measure implicit bias toward these targets, where the bias is less likely to be self-recognized by the LLM. Then, the LLM is prompted to self-evaluate the templates filled by itself to measure explicit bias toward the same targets, where the bias is more likely to be self-recognized by the LLM. Experiments conducted on state-of-the-art LLMs reveal human-like inconsistency between explicit and implicit occupational gender biases. This work bridges a critical gap where prior studies concentrate solely on either explicit or implicit bias. We advocate that future work highlight the relation between explicit and implicit biases in LLMs.", }
While extensive work has examined the explicit and implicit biases in large language models (LLMs), little research explores the relation between these two types of biases. This paper presents a comparative study of the explicit and implicit biases in LLMs grounded in social psychology. Social psychology distinguishes between explicit and implicit biases by whether the bias can be self-recognized by individuals. Aligning with this conceptualization, we propose a self-evaluation-based two-stage measurement of explicit and implicit biases within LLMs. First, the LLM is prompted to automatically fill templates with social targets to measure implicit bias toward these targets, where the bias is less likely to be self-recognized by the LLM. Then, the LLM is prompted to self-evaluate the templates filled by itself to measure explicit bias toward the same targets, where the bias is more likely to be self-recognized by the LLM. Experiments conducted on state-of-the-art LLMs reveal human-like inconsistency between explicit and implicit occupational gender biases. This work bridges a critical gap where prior studies concentrate solely on either explicit or implicit bias. We advocate that future work highlight the relation between explicit and implicit biases in LLMs.
[ "Zhao, Yachao", "Wang, Bo", "Wang, Yan", "Zhao, Dongming", "Jin, Xiaojia", "Zhang, Jijun", "He, Ruifang", "Hou, Yuexian" ]
A Comparative Study of Explicit and Implicit Gender Biases in Large Language Models via Self-evaluation
lrec-main.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.18.bib
https://aclanthology.org/2024.lrec-main.18/
@inproceedings{caporusso-etal-2024-computational, title = "A Computational Analysis of the Dehumanisation of Migrants from Syria and {U}kraine in {S}lovene News Media", author = "Caporusso, Jaya and Hoogland, Damar and Brglez, Mojca and Koloski, Boshko and Purver, Matthew and Pollak, Senja", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.18", pages = "199--210", abstract = "Dehumanisation involves the perception and/or treatment of a social group{'}s members as less than human. This phenomenon is rarely addressed with computational linguistic techniques. We adapt a recently proposed approach for English, making it easier to transfer to other languages and to evaluate, introducing a new sentiment resource, the use of zero-shot cross-lingual valence and arousal detection, and a new method for statistical significance testing. We then apply it to study attitudes to migration expressed in Slovene newspapers, to examine changes in the Slovene discourse on migration between the 2015-16 migration crisis following the war in Syria and the 2022-23 period following the war in Ukraine. We find that while this discourse became more negative and more intense over time, it is less dehumanising when specifically addressing Ukrainian migrants compared to others.", }
Dehumanisation involves the perception and/or treatment of a social group's members as less than human. This phenomenon is rarely addressed with computational linguistic techniques. We adapt a recently proposed approach for English, making it easier to transfer to other languages and to evaluate, introducing a new sentiment resource, the use of zero-shot cross-lingual valence and arousal detection, and a new method for statistical significance testing. We then apply it to study attitudes to migration expressed in Slovene newspapers, to examine changes in the Slovene discourse on migration between the 2015-16 migration crisis following the war in Syria and the 2022-23 period following the war in Ukraine. We find that while this discourse became more negative and more intense over time, it is less dehumanising when specifically addressing Ukrainian migrants compared to others.
[ "Caporusso, Jaya", "Hoogl", ", Damar", "Brglez, Mojca", "Koloski, Boshko", "Purver, Matthew", "Pollak, Senja" ]
A Computational Analysis of the Dehumanisation of Migrants from Syria and Ukraine in Slovene News Media
lrec-main.18
Poster
2404.07036
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.19.bib
https://aclanthology.org/2024.lrec-main.19/
@inproceedings{nagata-etal-2024-computational, title = "A Computational Approach to Quantifying Grammaticization of {E}nglish Deverbal Prepositions", author = "Nagata, Ryo and Kawasaki, Yoshifumi and Otani, Naoki and Takamura, Hiroya", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.19", pages = "211--220", abstract = "This paper explores grammaticization of deverbal prepositions by a computational approach based on corpus data. Deverbal prepositions are words or phrases that are derived from a verb and that behave as a preposition such as {``}regarding{''} and {``}according to{''}. Linguistic studies have revealed important aspects of grammaticization of deverbal prepositions. This paper augments them by methods for measuring the degree of grammaticization of deverbal prepositions based on non-contextualized or contextualized word vectors. Experiments show that the methods correlate well with human judgements (as high as 0.69 in Spearman{'}s rank correlation coefficient). Using the best-performing method, this paper further shows that the methods support previous findings in linguistics including (i) Deverbal prepositions are marginal in terms of prepositionality; and (ii) The process where verbs are grammaticized into prepositions is gradual. As a pilot study, it also conducts a diachronic analysis of grammaticization of deverbal preposition.", }
This paper explores grammaticization of deverbal prepositions by a computational approach based on corpus data. Deverbal prepositions are words or phrases that are derived from a verb and that behave as a preposition, such as "regarding" and "according to". Linguistic studies have revealed important aspects of grammaticization of deverbal prepositions. This paper augments them by methods for measuring the degree of grammaticization of deverbal prepositions based on non-contextualized or contextualized word vectors. Experiments show that the methods correlate well with human judgements (as high as 0.69 in Spearman's rank correlation coefficient). Using the best-performing method, this paper further shows that the methods support previous findings in linguistics including (i) Deverbal prepositions are marginal in terms of prepositionality; and (ii) The process where verbs are grammaticized into prepositions is gradual. As a pilot study, it also conducts a diachronic analysis of grammaticization of deverbal prepositions.
[ "Nagata, Ryo", "Kawasaki, Yoshifumi", "Otani, Naoki", "Takamura, Hiroya" ]
A Computational Approach to Quantifying Grammaticization of English Deverbal Prepositions
lrec-main.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.20.bib
https://aclanthology.org/2024.lrec-main.20/
@inproceedings{paikens-etal-2024-computational, title = "A Computational Model of {L}atvian Morphology", author = "Paikens, Peteris and Pretkalni{\c{n}}a, Lauma and Rituma, Laura", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.20", pages = "221--232", abstract = "In this paper we describe a computational model of Latvian morphology that provides a formal structure for Latvian word form inflection and has been implemented in software for generation, analysis and lemmatization of Latvian word forms. The work was motivated by the need for a NLP inflection model that can cover all the complexity of Latvian language and explicitly enumerate and handle the many exceptions to the general Latvian inflection principles. This is an evolution of earlier work, extending the initial proof of concept model to properly cover Latvian language. We provide a set of morphological paradigms that differ from current linguistic tradition, a set of systematic stem changes and combine it with an extensive lexicon that includes paradigm information and structured morphological attributes for 118 000 lexemes. This model has been applied on both dictionary and corpora data, demonstrating that it provides a good coverage for modern Latvian literary language. We also consider that there is a good potential to extend this also to the related Latgalian language.", }
In this paper we describe a computational model of Latvian morphology that provides a formal structure for Latvian word form inflection and has been implemented in software for generation, analysis and lemmatization of Latvian word forms. The work was motivated by the need for an NLP inflection model that can cover all the complexity of the Latvian language and explicitly enumerate and handle the many exceptions to the general Latvian inflection principles. This is an evolution of earlier work, extending the initial proof-of-concept model to properly cover the Latvian language. We provide a set of morphological paradigms that differ from current linguistic tradition and a set of systematic stem changes, and combine them with an extensive lexicon that includes paradigm information and structured morphological attributes for 118 000 lexemes. This model has been applied to both dictionary and corpus data, demonstrating that it provides good coverage of the modern Latvian literary language. We also see good potential to extend this to the related Latgalian language.
[ "Paikens, Peteris", "Pretkalni{\\c{n}}a, Lauma", "Rituma, Laura" ]
A Computational Model of Latvian Morphology
lrec-main.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.21.bib
https://aclanthology.org/2024.lrec-main.21/
@inproceedings{gerlach-etal-2024-concept, title = "A Concept Based Approach for Translation of Medical Dialogues into Pictographs", author = "Gerlach, Johanna and Bouillon, Pierrette and Mutal, Jonathan and Spechbach, Herv{\'e}", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.21", pages = "233--242", abstract = "Pictographs have been found to improve patient comprehension of medical information or instructions. However, tools to produce pictograph representations from natural language are still scarce. In this contribution we describe a system that automatically translates French speech into pictographs to enable diagnostic interviews in emergency settings, thereby providing a tool to overcome the language barrier or provide support in Augmentative and Alternative Communication (AAC) contexts. Our approach is based on a semantic gloss that serves as pivot between spontaneous language and pictographs, with medical concepts represented using the UMLS ontology. In this study we evaluate different available pre-trained models fine-tuned on artificial data to translate French into this semantic gloss. On unseen data collected in real settings, consisting of questions and instructions by physicians, the best model achieves an F0.5 score of 86.7. A complementary human evaluation of the semantic glosses differing from the reference shows that 71{\%} of these would be usable to transmit the intended meaning. Finally, a human evaluation of the pictograph sequences derived from the gloss reveals very few additions, omissions or order issues ({\textless}3{\%}), suggesting that the gloss as designed is well suited as a pivot for translation into pictographs.", }
Pictographs have been found to improve patient comprehension of medical information or instructions. However, tools to produce pictograph representations from natural language are still scarce. In this contribution we describe a system that automatically translates French speech into pictographs to enable diagnostic interviews in emergency settings, thereby providing a tool to overcome the language barrier or provide support in Augmentative and Alternative Communication (AAC) contexts. Our approach is based on a semantic gloss that serves as pivot between spontaneous language and pictographs, with medical concepts represented using the UMLS ontology. In this study we evaluate different available pre-trained models fine-tuned on artificial data to translate French into this semantic gloss. On unseen data collected in real settings, consisting of questions and instructions by physicians, the best model achieves an F0.5 score of 86.7. A complementary human evaluation of the semantic glosses differing from the reference shows that 71% of these would be usable to transmit the intended meaning. Finally, a human evaluation of the pictograph sequences derived from the gloss reveals very few additions, omissions or order issues (<3%), suggesting that the gloss as designed is well suited as a pivot for translation into pictographs.
[ "Gerlach, Johanna", "Bouillon, Pierrette", "Mutal, Jonathan", "Spechbach, Herv{\\'e}" ]
A Concept Based Approach for Translation of Medical Dialogues into Pictographs
lrec-main.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.22.bib
https://aclanthology.org/2024.lrec-main.22/
@inproceedings{bonial-tayyar-madabushi-2024-construction, title = "A Construction Grammar Corpus of Varying Schematicity: A Dataset for the Evaluation of Abstractions in Language Models", author = "Bonial, Claire and Tayyar Madabushi, Harish", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.22", pages = "243--255", abstract = "Large Language Models (LLMs) have been developed without a theoretical framework, yet we posit that evaluating and improving LLMs will benefit from the development of theoretical frameworks that enable comparison of the structures of human language and the model of language built up by LLMs through the processing of text. In service of this goal, we develop the Construction Grammar Schematicity ({``}CoGS{''}) corpus of 10 distinct English constructions, where the constructions vary with respect to schematicity, or in other words the level to which constructional slots require specific, fixed lexical items, or can be filled with a variety of elements that fulfill a particular semantic role of the slot. Our corpus constructions are carefully curated to range from substantive, frozen constructions (e.g., Let-alone) to entirely schematic constructions (e.g., Resultative). The corpus was collected to allow us to probe LLMs for constructional information at varying levels of abstraction. We present our own probing experiments using this corpus, which clearly demonstrate that even the largest LLMs are limited to more substantive constructions and do not exhibit recognition of the similarity of purely schematic constructions. We publicly release our dataset, prompts, and associated model responses.", }
Large Language Models (LLMs) have been developed without a theoretical framework, yet we posit that evaluating and improving LLMs will benefit from the development of theoretical frameworks that enable comparison of the structures of human language and the model of language built up by LLMs through the processing of text. In service of this goal, we develop the Construction Grammar Schematicity ("CoGS") corpus of 10 distinct English constructions, where the constructions vary with respect to schematicity, or in other words the level to which constructional slots require specific, fixed lexical items, or can be filled with a variety of elements that fulfill a particular semantic role of the slot. Our corpus constructions are carefully curated to range from substantive, frozen constructions (e.g., Let-alone) to entirely schematic constructions (e.g., Resultative). The corpus was collected to allow us to probe LLMs for constructional information at varying levels of abstraction. We present our own probing experiments using this corpus, which clearly demonstrate that even the largest LLMs are limited to more substantive constructions and do not exhibit recognition of the similarity of purely schematic constructions. We publicly release our dataset, prompts, and associated model responses.
[ "Bonial, Claire", "Tayyar Madabushi, Harish" ]
A Construction Grammar Corpus of Varying Schematicity: A Dataset for the Evaluation of Abstractions in Language Models
lrec-main.22
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.23.bib
https://aclanthology.org/2024.lrec-main.23/
@inproceedings{porada-etal-2024-controlled, title = "A Controlled Reevaluation of Coreference Resolution Models", author = "Porada, Ian and Zou, Xiyuan and Cheung, Jackie Chi Kit", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.23", pages = "256--263", abstract = "All state-of-the-art coreference resolution (CR) models involve finetuning a pretrained language model. Whether the superior performance of one CR model over another is due to the choice of language model or other factors, such as the task-specific architecture, is difficult or impossible to determine due to lack of a standardized experimental setup. To resolve this ambiguity, we systematically evaluate five CR models and control for certain design decisions including the pretrained language model used by each. When controlling for language model size, encoder-based CR models outperform more recent decoder-based models in terms of both accuracy and inference speed. Surprisingly, among encoder-based CR models, more recent models are not always more accurate, and the oldest CR model that we test generalizes the best to out-of-domain textual genres. We conclude that controlling for the choice of language model reduces most, but not all, of the increase in F1 score reported in the past five years.", }
All state-of-the-art coreference resolution (CR) models involve finetuning a pretrained language model. Whether the superior performance of one CR model over another is due to the choice of language model or other factors, such as the task-specific architecture, is difficult or impossible to determine due to lack of a standardized experimental setup. To resolve this ambiguity, we systematically evaluate five CR models and control for certain design decisions including the pretrained language model used by each. When controlling for language model size, encoder-based CR models outperform more recent decoder-based models in terms of both accuracy and inference speed. Surprisingly, among encoder-based CR models, more recent models are not always more accurate, and the oldest CR model that we test generalizes the best to out-of-domain textual genres. We conclude that controlling for the choice of language model reduces most, but not all, of the increase in F1 score reported in the past five years.
[ "Porada, Ian", "Zou, Xiyuan", "Cheung, Jackie Chi Kit" ]
A Controlled Reevaluation of Coreference Resolution Models
lrec-main.23
Poster
2404.00727
[ "https://github.com/ianporada/coref-reeval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.24.bib
https://aclanthology.org/2024.lrec-main.24/
@inproceedings{li-etal-2024-corpus, title = "A Corpus and Method for {C}hinese Named Entity Recognition in Manufacturing", author = "Li, Ruiting and Wang, Peiyan and Wang, Libang and Yang, Danqingxin and Cai, Dongfeng", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.24", pages = "264--272", abstract = "Manufacturing specifications are documents entailing different techniques, processes, and components involved in manufacturing. There is a growing demand for named entity recognition (NER) resources and techniques for manufacturing-specific named entities, with the development of smart manufacturing. In this paper, we introduce a corpus of Chinese manufacturing specifications, named MS-NERC, including 4,424 sentences and 16,383 entities. We also propose an entity recognizer named Trainable State Transducer (TST), which is initialized with a finite state transducer describing the morphological patterns of entities. It can directly recognize entities based on prior morphological knowledge without training. Experimental results show that TST achieves an overall 82.05{\%} F1 score for morphological-specific entities in zero-shot. TST can be improved through training, the result of which outperforms neural methods in few-shot and rich-resource. We believe that our corpus and model will be valuable resources for NER research not only in manufacturing but also in other low-resource domains.", }
Manufacturing specifications are documents entailing different techniques, processes, and components involved in manufacturing. There is a growing demand for named entity recognition (NER) resources and techniques for manufacturing-specific named entities, with the development of smart manufacturing. In this paper, we introduce a corpus of Chinese manufacturing specifications, named MS-NERC, including 4,424 sentences and 16,383 entities. We also propose an entity recognizer named Trainable State Transducer (TST), which is initialized with a finite state transducer describing the morphological patterns of entities. It can directly recognize entities based on prior morphological knowledge without training. Experimental results show that TST achieves an overall 82.05% F1 score for morphological-specific entities in zero-shot. TST can be improved through training, the result of which outperforms neural methods in few-shot and rich-resource settings. We believe that our corpus and model will be valuable resources for NER research not only in manufacturing but also in other low-resource domains.
[ "Li, Ruiting", "Wang, Peiyan", "Wang, Libang", "Yang, Danqingxin", "Cai, Dongfeng" ]
A Corpus and Method for Chinese Named Entity Recognition in Manufacturing
lrec-main.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.25.bib
https://aclanthology.org/2024.lrec-main.25/
@inproceedings{antici-etal-2024-corpus, title = "A Corpus for Sentence-Level Subjectivity Detection on {E}nglish News Articles", author = "Antici, Francesco and Ruggeri, Federico and Galassi, Andrea and Korre, Katerina and Muti, Arianna and Bardi, Alessandra and Fedotova, Alice and Barr{\'o}n-Cede{\~n}o, Alberto", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.25", pages = "273--285", abstract = "We develop novel annotation guidelines for sentence-level subjectivity detection, which are not limited to language-specific cues. We use our guidelines to collect NewsSD-ENG, a corpus of 638 objective and 411 subjective sentences extracted from English news articles on controversial topics. Our corpus paves the way for subjectivity detection in English and across other languages without relying on language-specific tools, such as lexicons or machine translation. We evaluate state-of-the-art multilingual transformer-based models on the task in mono-, multi-, and cross-language settings. For this purpose, we re-annotate an existing Italian corpus. We observe that models trained in the multilingual setting achieve the best performance on the task.", }
We develop novel annotation guidelines for sentence-level subjectivity detection, which are not limited to language-specific cues. We use our guidelines to collect NewsSD-ENG, a corpus of 638 objective and 411 subjective sentences extracted from English news articles on controversial topics. Our corpus paves the way for subjectivity detection in English and across other languages without relying on language-specific tools, such as lexicons or machine translation. We evaluate state-of-the-art multilingual transformer-based models on the task in mono-, multi-, and cross-language settings. For this purpose, we re-annotate an existing Italian corpus. We observe that models trained in the multilingual setting achieve the best performance on the task.
[ "Antici, Francesco", "Ruggeri, Federico", "Galassi, Andrea", "Korre, Katerina", "Muti, Arianna", "Bardi, Aless", "ra", "Fedotova, Alice", "Barr{\\'o}n-Cede{\\~n}o, Alberto" ]
A Corpus for Sentence-Level Subjectivity Detection on English News Articles
lrec-main.25
Poster
2305.18034
[ "https://github.com/lt-nlp-lab-unibo/newssd-eng" ]
https://huggingface.co./papers/2305.18034
0
0
0
8
1
[]
[ "tasksource/subjectivity" ]
[]
https://aclanthology.org/2024.lrec-main.26.bib
https://aclanthology.org/2024.lrec-main.26/
@inproceedings{otto-etal-2024-corpus, title = "A Corpus of {G}erman {A}bstract {M}eaning {R}epresentation ({D}e{AMR})", author = "Otto, Christoph and Groschwitz, Jonas and Koller, Alexander and Yang, Xiulin and Donatelli, Lucia", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.26", pages = "286--292", abstract = "We present the first comprehensive set of guidelines for German Abstract Meaning Representation (Deutsche AMR, DeAMR) along with an annotated corpus of 400 DeAMR. Taking English AMR (EnAMR) as our starting point, we propose significant adaptations to faithfully represent the structure and semantics of German, focusing particularly on verb frames, compound words, and modality. We validate our annotation through inter-annotator agreement and further evaluate our corpus with a comparison of structural divergences between EnAMR and DeAMR on parallel sentences, replicating previous work that finds both cases of cross-lingual structural alignment and cases of meaningful linguistic divergence. Finally, we fine-tune state-of-the-art multi-lingual and cross-lingual AMR parsers on our corpus and find that, while our small corpus is insufficient to produce quality output, there is a need to continue develop and evaluate against gold non-English AMR data.", }
We present the first comprehensive set of guidelines for German Abstract Meaning Representation (Deutsche AMR, DeAMR) along with an annotated corpus of 400 DeAMR. Taking English AMR (EnAMR) as our starting point, we propose significant adaptations to faithfully represent the structure and semantics of German, focusing particularly on verb frames, compound words, and modality. We validate our annotation through inter-annotator agreement and further evaluate our corpus with a comparison of structural divergences between EnAMR and DeAMR on parallel sentences, replicating previous work that finds both cases of cross-lingual structural alignment and cases of meaningful linguistic divergence. Finally, we fine-tune state-of-the-art multi-lingual and cross-lingual AMR parsers on our corpus and find that, while our small corpus is insufficient to produce quality output, there is a need to continue to develop and evaluate against gold non-English AMR data.
[ "Otto, Christoph", "Groschwitz, Jonas", "Koller, Alex", "er", "Yang, Xiulin", "Donatelli, Lucia" ]
A Corpus of German Abstract Meaning Representation (DeAMR)
lrec-main.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.27.bib
https://aclanthology.org/2024.lrec-main.27/
@inproceedings{coulange-etal-2024-corpus, title = "A Corpus of Spontaneous {L}2 {E}nglish Speech for Real-situation Speaking Assessment", author = "Coulange, Sylvain and Fries, Marie-H{\'e}l{\`e}ne and Masperi, Monica and Rossato, Solange", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.27", pages = "293--297", abstract = "When assessing second language proficiency (L2), evaluation of spontaneous speech performance is crucial. This paper presents a corpus of spontaneous L2 English speech, focusing on the speech performance of B1 and B2 proficiency speakers. Two hundred and sixty university students were recorded during a speaking task as part of a French national certificate in English. This task entailed a 10-minute role-play among 2 or 3 candidates, arguing about a controversial topic, in order to reach a negotiated compromise. Each student{'}s performance was evaluated by two experts, categorizing them into B2, B1 or below B1 speaking proficiency levels. Automatic diarization, transcription, and alignment at the word level were performed on the recorded conversations, in order to analyse lexical stress realisation in polysyllabic plain words of B1 and B2 proficiency students. Results showed that only 35.4{\%} of the 6,350 targeted words had stress detected on the expected syllable, revealing a common stress shift to the final syllable. Besides a substantial inter-speaker variability (0{\%} to 68.4{\%}), B2 speakers demonstrated a slightly higher stress accuracy (36{\%}) compared to B1 speakers (29.6{\%}). Those with accurate stress placement utilized F0 and intensity to make syllable prominence, while speakers with lower accuracy tended to lengthen words on their last syllables, with minimal changes in other dimensions.", }
When assessing second language proficiency (L2), evaluation of spontaneous speech performance is crucial. This paper presents a corpus of spontaneous L2 English speech, focusing on the speech performance of B1 and B2 proficiency speakers. Two hundred and sixty university students were recorded during a speaking task as part of a French national certificate in English. This task entailed a 10-minute role-play among 2 or 3 candidates, arguing about a controversial topic, in order to reach a negotiated compromise. Each student's performance was evaluated by two experts, categorizing them into B2, B1 or below B1 speaking proficiency levels. Automatic diarization, transcription, and alignment at the word level were performed on the recorded conversations, in order to analyse lexical stress realisation in polysyllabic plain words of B1 and B2 proficiency students. Results showed that only 35.4% of the 6,350 targeted words had stress detected on the expected syllable, revealing a common stress shift to the final syllable. Besides substantial inter-speaker variability (0% to 68.4%), B2 speakers demonstrated a slightly higher stress accuracy (36%) compared to B1 speakers (29.6%). Those with accurate stress placement utilized F0 and intensity to make syllables prominent, while speakers with lower accuracy tended to lengthen words on their last syllables, with minimal changes in other dimensions.
[ "Coulange, Sylvain", "Fries, Marie-H{\\'e}l{\\`e}ne", "Masperi, Monica", "Rossato, Solange" ]
A Corpus of Spontaneous L2 English Speech for Real-situation Speaking Assessment
lrec-main.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.28.bib
https://aclanthology.org/2024.lrec-main.28/
@inproceedings{tomar-etal-2024-action, title = "Action and Reaction Go Hand in Hand! a Multi-modal Dialogue Act Aided Sarcasm Identification", author = "Tomar, Mohit Singh and Saha, Tulika and Tiwari, Abhisek and Saha, Sriparna", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.28", pages = "298--309", abstract = "Sarcasm primarily involves saying something but {``}meaning the opposite{''} or {``}meaning something completely different{''} in order to convey a particular tone or mood. In both the above cases, the {``}meaning{''} is reflected by the communicative intention of the speaker, known as dialogue acts. In this paper, we seek to investigate a novel phenomenon of analyzing sarcasm in the context of dialogue acts with the hypothesis that the latter helps to understand the former better. Toward this aim, we extend the multi-modal MUStARD dataset to enclose dialogue acts for each dialogue. To demonstrate the utility of our hypothesis, we develop a dialogue act-aided multi-modal transformer network for sarcasm identification (MM-SARDAC), leveraging interrelation between these tasks. In addition, we introduce an order-infused, multi-modal infusion mechanism into our proposed model, which allows for a more intuitive combined modality representation by selectively focusing on relevant modalities in an ordered manner. Extensive empirical results indicate that dialogue act-aided sarcasm identification achieved better performance compared to performing sarcasm identification alone. The dataset and code are available at https://github.com/mohit2b/MM-SARDAC.", }
Sarcasm primarily involves saying something but "meaning the opposite" or "meaning something completely different" in order to convey a particular tone or mood. In both the above cases, the "meaning" is reflected by the communicative intention of the speaker, known as dialogue acts. In this paper, we seek to investigate a novel phenomenon of analyzing sarcasm in the context of dialogue acts with the hypothesis that the latter helps to understand the former better. Toward this aim, we extend the multi-modal MUStARD dataset to enclose dialogue acts for each dialogue. To demonstrate the utility of our hypothesis, we develop a dialogue act-aided multi-modal transformer network for sarcasm identification (MM-SARDAC), leveraging interrelation between these tasks. In addition, we introduce an order-infused, multi-modal infusion mechanism into our proposed model, which allows for a more intuitive combined modality representation by selectively focusing on relevant modalities in an ordered manner. Extensive empirical results indicate that dialogue act-aided sarcasm identification achieved better performance compared to performing sarcasm identification alone. The dataset and code are available at https://github.com/mohit2b/MM-SARDAC.
[ "Tomar, Mohit Singh", "Saha, Tulika", "Tiwari, Abhisek", "Saha, Sriparna" ]
Action and Reaction Go Hand in Hand! a Multi-modal Dialogue Act Aided Sarcasm Identification
lrec-main.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.29.bib
https://aclanthology.org/2024.lrec-main.29/
@inproceedings{yu-etal-2024-action, title = "Action-Concentrated Embedding Framework: This Is Your Captain Sign-tokening", author = "Yu, Hyunwook and Shin, Suhyeon and Heo, Junku and Shin, Hyuntaek and Kim, Hyosu and Kim, Mucheol", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.29", pages = "310--320", abstract = "Sign language is the primary communication medium for people who are deaf or have hearing loss. However, given the divergent range of sensory abilities of these individuals, there is a communication gap that needs to be addressed. In this paper, we present action-concentrated embedding (ACE), which is a novel sign token embedding framework. Additionally, to provide a more structured foundation for sign language analysis, we introduce a dedicated notation system tailored for sign language that endeavors to encapsulate the nuanced gestures and movements that are integral with sign communication. The proposed ACE approach tracks a signer{'}s actions based on human posture estimation. Tokenizing these actions and capturing the token embedding using a short-time Fourier transform encapsulates the time-based behavioral changes. Hence, ACE offers input embedding to translate sign language into natural language sentences. When tested against a disaster sign language dataset using automated machine translation measures, ACE notably surpasses prior research in terms of translation capabilities, improving the performance by up to 5.79{\%} for BLEU-4 and 5.46{\%} for ROUGE-L metric.", }
Sign language is the primary communication medium for people who are deaf or have hearing loss. However, given the divergent range of sensory abilities of these individuals, there is a communication gap that needs to be addressed. In this paper, we present action-concentrated embedding (ACE), which is a novel sign token embedding framework. Additionally, to provide a more structured foundation for sign language analysis, we introduce a dedicated notation system tailored for sign language that endeavors to encapsulate the nuanced gestures and movements that are integral to sign communication. The proposed ACE approach tracks a signer's actions based on human posture estimation. Tokenizing these actions and capturing the token embedding using a short-time Fourier transform encapsulates the time-based behavioral changes. Hence, ACE offers input embedding to translate sign language into natural language sentences. When tested against a disaster sign language dataset using automated machine translation measures, ACE notably surpasses prior research in terms of translation capabilities, improving the performance by up to 5.79% for BLEU-4 and 5.46% for the ROUGE-L metric.
[ "Yu, Hyunwook", "Shin, Suhyeon", "Heo, Junku", "Shin, Hyuntaek", "Kim, Hyosu", "Kim, Mucheol" ]
Action-Concentrated Embedding Framework: This Is Your Captain Sign-tokening
lrec-main.29
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.30.bib
https://aclanthology.org/2024.lrec-main.30/
@inproceedings{vacareanu-etal-2024-active, title = "Active Learning Design Choices for {NER} with Transformers", author = "Vacareanu, Robert and Noriega-Atala, Enrique and Hahn-Powell, Gus and Valenzuela-Escarcega, Marco A. and Surdeanu, Mihai", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.30", pages = "321--334", abstract = "We explore multiple important choices that have not been analyzed in conjunction regarding active learning for token classification using transformer networks. These choices are: (i) how to select what to annotate, (ii) decide whether to annotate entire sentences or smaller sentence fragments, (iii) how to train with incomplete annotations at token-level, and (iv) how to select the initial seed dataset. We explore whether annotating at sub-sentence level can translate to an improved downstream performance by considering two different sub-sentence annotation strategies: (i) entity-level, and (ii) token-level. These approaches result in some sentences being only partially annotated. To address this issue, we introduce and evaluate multiple strategies to deal with partially-annotated sentences during the training process. We show that annotating at the sub-sentence level achieves comparable or better performance than sentence-level annotations with a smaller number of annotated tokens. We then explore the extent to which the performance gap remains once accounting for the annotation time and found that both annotation schemes perform similarly.", }
We explore multiple important choices that have not been analyzed in conjunction regarding active learning for token classification using transformer networks. These choices are: (i) how to select what to annotate, (ii) whether to annotate entire sentences or smaller sentence fragments, (iii) how to train with incomplete annotations at the token level, and (iv) how to select the initial seed dataset. We explore whether annotating at the sub-sentence level can translate to improved downstream performance by considering two different sub-sentence annotation strategies: (i) entity-level, and (ii) token-level. These approaches result in some sentences being only partially annotated. To address this issue, we introduce and evaluate multiple strategies to deal with partially-annotated sentences during the training process. We show that annotating at the sub-sentence level achieves comparable or better performance than sentence-level annotations with a smaller number of annotated tokens. We then explore the extent to which the performance gap remains once accounting for the annotation time and find that both annotation schemes perform similarly.
[ "Vacareanu, Robert", "Noriega-Atala, Enrique", "Hahn-Powell, Gus", "Valenzuela-Escarcega, Marco A.", "Surdeanu, Mihai" ]
Active Learning Design Choices for NER with Transformers
lrec-main.30
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.31.bib
https://aclanthology.org/2024.lrec-main.31/
@inproceedings{palomar-giner-etal-2024-curated, title = "A {CURATE}d {CAT}alog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages", author = "Palomar-Giner, Jorge and Saiz, Jose Javier and Espu{\~n}a, Ferran and Mina, Mario and Da Dalt, Severino and Llop, Joan and Ostendorff, Malte and Ortiz Suarez, Pedro and Rehm, Georg and Gonzalez-Agirre, Aitor and Villegas, Marta", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.31", pages = "335--349", abstract = "We present and describe two language resources in this paper: CATalog 1.0, the largest text corpus in Catalan to date, and CURATE (Corpus Utility for RAting TExt), a modular, parallelizable pipeline used for processing and scoring documents based on text quality that we have optimised to run in High Performance Cluster (HPC) environments. In the coming sections we describe our data preprocessing pipeline at length; traditional pipelines usually implement a set of binary filters such that a given document is either in or out. In our experience with Catalan, in lower-resource settings it is more practical to instead assign a document a soft score to allow for more flexible decision-making. We describe how the document score is calculated and highlight its interpretability by showing that it is significantly correlated with human judgements as obtained from a comparative judgement experiment. We additionally describe the different subcorpora that make up CATalog 1.0.", }
We present and describe two language resources in this paper: CATalog 1.0, the largest text corpus in Catalan to date, and CURATE (Corpus Utility for RAting TExt), a modular, parallelizable pipeline used for processing and scoring documents based on text quality that we have optimised to run in High Performance Cluster (HPC) environments. In the coming sections we describe our data preprocessing pipeline at length; traditional pipelines usually implement a set of binary filters such that a given document is either in or out. In our experience with Catalan, in lower-resource settings it is more practical to instead assign a document a soft score to allow for more flexible decision-making. We describe how the document score is calculated and highlight its interpretability by showing that it is significantly correlated with human judgements as obtained from a comparative judgement experiment. We additionally describe the different subcorpora that make up CATalog 1.0.
[ "Palomar-Giner, Jorge", "Saiz, Jose Javier", "Espu{\\~n}a, Ferran", "Mina, Mario", "Da Dalt, Severino", "Llop, Joan", "Ostendorff, Malte", "Ortiz Suarez, Pedro", "Rehm, Georg", "Gonzalez-Agirre, Aitor", "Villegas, Marta" ]
A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages
lrec-main.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.32.bib
https://aclanthology.org/2024.lrec-main.32/
@inproceedings{braga-etal-2024-adakron, title = "{A}da{K}ron: An Adapter-based Parameter Efficient Model Tuning with Kronecker Product", author = "Braga, Marco and Raganato, Alessandro and Pasi, Gabriella", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.32", pages = "350--357", abstract = "The fine-tuning paradigm has been widely adopted to train neural models tailored for specific tasks. However, the recent upsurge of Large Language Models (LLMs), characterized by billions of parameters, has introduced profound computational challenges to the fine-tuning process. This has fueled intensive research on Parameter-Efficient Fine-Tuning (PEFT) techniques, usually involving the training of a selective subset of the original model parameters. One of the most used approaches is Adapters, which add trainable lightweight layers to the existing pretrained weights. Within this context, we propose AdaKron, an Adapter-based fine-tuning with the Kronecker product. In particular, we leverage the Kronecker product to combine the output of two small networks, resulting in a final vector whose dimension is the product of the dimensions of the individual outputs, allowing us to train only 0.55{\%} of the model{'}s original parameters. We evaluate AdaKron performing a series of experiments on the General Language Understanding Evaluation (GLUE) benchmark, achieving results in the same ballpark as recent state-of-the-art PEFT methods, despite training fewer parameters.", }
The fine-tuning paradigm has been widely adopted to train neural models tailored for specific tasks. However, the recent upsurge of Large Language Models (LLMs), characterized by billions of parameters, has introduced profound computational challenges to the fine-tuning process. This has fueled intensive research on Parameter-Efficient Fine-Tuning (PEFT) techniques, usually involving the training of a selective subset of the original model parameters. One of the most used approaches is Adapters, which add trainable lightweight layers to the existing pretrained weights. Within this context, we propose AdaKron, an Adapter-based fine-tuning with the Kronecker product. In particular, we leverage the Kronecker product to combine the output of two small networks, resulting in a final vector whose dimension is the product of the dimensions of the individual outputs, allowing us to train only 0.55% of the model's original parameters. We evaluate AdaKron performing a series of experiments on the General Language Understanding Evaluation (GLUE) benchmark, achieving results in the same ballpark as recent state-of-the-art PEFT methods, despite training fewer parameters.
[ "Braga, Marco", "Raganato, Aless", "ro", "Pasi, Gabriella" ]
AdaKron: An Adapter-based Parameter Efficient Model Tuning with Kronecker Product
lrec-main.32
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.33.bib
https://aclanthology.org/2024.lrec-main.33/
@inproceedings{xu-etal-2024-adaptive, title = "Adaptive Reinforcement Tuning Language Models as Hard Data Generators for Sentence Representation", author = "Xu, Bo and Wu, Yifei and Wei, Shouang and Du, Ming and Wang, Hongya", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.33", pages = "358--371", abstract = "Sentence representation learning is a fundamental task in NLP. Existing methods use contrastive learning (CL) to learn effective sentence representations, which benefit from high-quality contrastive data but require extensive human annotation. Large language models (LLMs) like ChatGPT and GPT4 can automatically generate such data. However, this alternative strategy also encounters challenges: 1) obtaining high-quality generated data from small-parameter LLMs is difficult, and 2) inefficient utilization of the generated data. To address these challenges, we propose a novel adaptive reinforcement tuning (ART) framework. Specifically, to address the first challenge, we introduce a reinforcement learning approach for fine-tuning small-parameter LLMs, enabling the generation of high-quality hard contrastive data without human feedback. To address the second challenge, we propose an adaptive iterative framework to guide the small-parameter LLMs to generate progressively harder samples through multiple iterations, thereby maximizing the utility of generated data. Experiments conducted on seven semantic text similarity tasks demonstrate that the sentence representation models trained using the synthetic data generated by our proposed method achieve state-of-the-art performance. Our code is available at https://github.com/WuNein/AdaptCL.", }
Sentence representation learning is a fundamental task in NLP. Existing methods use contrastive learning (CL) to learn effective sentence representations, which benefit from high-quality contrastive data but require extensive human annotation. Large language models (LLMs) like ChatGPT and GPT4 can automatically generate such data. However, this alternative strategy also encounters challenges: 1) obtaining high-quality generated data from small-parameter LLMs is difficult, and 2) inefficient utilization of the generated data. To address these challenges, we propose a novel adaptive reinforcement tuning (ART) framework. Specifically, to address the first challenge, we introduce a reinforcement learning approach for fine-tuning small-parameter LLMs, enabling the generation of high-quality hard contrastive data without human feedback. To address the second challenge, we propose an adaptive iterative framework to guide the small-parameter LLMs to generate progressively harder samples through multiple iterations, thereby maximizing the utility of generated data. Experiments conducted on seven semantic text similarity tasks demonstrate that the sentence representation models trained using the synthetic data generated by our proposed method achieve state-of-the-art performance. Our code is available at https://github.com/WuNein/AdaptCL.
[ "Xu, Bo", "Wu, Yifei", "Wei, Shouang", "Du, Ming", "Wang, Hongya" ]
Adaptive Reinforcement Tuning Language Models as Hard Data Generators for Sentence Representation
lrec-main.33
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.34.bib
https://aclanthology.org/2024.lrec-main.34/
@inproceedings{sun-etal-2024-adaptive, title = "Adaptive Simultaneous Sign Language Translation with Confident Translation Length Estimation", author = "Sun, Tong and Fu, Biao and Hu, Cong and Zhang, Liang and Zhang, Ruiquan and Shi, Xiaodong and Su, Jinsong and Chen, Yidong", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.34", pages = "372--384", abstract = "Traditional non-simultaneous Sign Language Translation (SLT) methods, while effective for pre-recorded videos, face challenges in real-time scenarios due to inherent inference delays. The emerging field of simultaneous SLT aims to address this issue by progressively translating incrementally received sign video. However, the sole existing work in simultaneous SLT adopts a fixed gloss-based policy, which suffer from limitations in boundary prediction and contextual comprehension. In this paper, we delve deeper into this area and propose an adaptive policy for simultaneous SLT. Our approach introduces the concept of {``}confident translation length{''}, denoting maximum accurate translation achievable from current input. An estimator measures this length for streaming sign video, enabling the model to make informed decisions on whether to wait for more input or proceed with translation. To train the estimator, we construct a training data of confident translation length based on the longest common prefix between translations of partial and complete inputs. Furthermore, we incorporate adaptive training, utilizing pseudo prefix pairs, to refine the offline translation model for optimal performance in simultaneous scenarios. Experimental results on PHOENIX2014T and CSL-Daily demonstrate the superiority of our adaptive policy over existing methods, particularly excelling in situations requiring extremely low latency.", }
Traditional non-simultaneous Sign Language Translation (SLT) methods, while effective for pre-recorded videos, face challenges in real-time scenarios due to inherent inference delays. The emerging field of simultaneous SLT aims to address this issue by progressively translating incrementally received sign video. However, the sole existing work in simultaneous SLT adopts a fixed gloss-based policy, which suffers from limitations in boundary prediction and contextual comprehension. In this paper, we delve deeper into this area and propose an adaptive policy for simultaneous SLT. Our approach introduces the concept of "confident translation length", denoting the maximum accurate translation achievable from the current input. An estimator measures this length for streaming sign video, enabling the model to make informed decisions on whether to wait for more input or proceed with translation. To train the estimator, we construct training data of confident translation lengths based on the longest common prefix between translations of partial and complete inputs. Furthermore, we incorporate adaptive training, utilizing pseudo prefix pairs, to refine the offline translation model for optimal performance in simultaneous scenarios. Experimental results on PHOENIX2014T and CSL-Daily demonstrate the superiority of our adaptive policy over existing methods, particularly excelling in situations requiring extremely low latency.
[ "Sun, Tong", "Fu, Biao", "Hu, Cong", "Zhang, Liang", "Zhang, Ruiquan", "Shi, Xiaodong", "Su, Jinsong", "Chen, Yidong" ]
Adaptive Simultaneous Sign Language Translation with Confident Translation Length Estimation
lrec-main.34
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.35.bib
https://aclanthology.org/2024.lrec-main.35/
@inproceedings{blouin-etal-2024-dataset, title = "A Dataset for Named Entity Recognition and Entity Linking in {C}hinese Historical Newspapers", author = "Blouin, Baptiste and Armand, C{\'e}cile and Henriot, Christian", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.35", pages = "385--394", abstract = "In this study, we present a novel historical Chinese dataset for named entity recognition, entity linking, coreference and entity relations. We use data from Chinese newspapers from 1872 to 1949 and multilingual bibliographic resources from the same period. The period and the language are the main strength of the present work, offering a resource which covers different styles and language uses, as well as the largest historical Chinese NER dataset with manual annotations from this transitional period. After detailing the selection and annotation process, we present the very first results that can be obtained from this dataset. Texts and annotations are freely downloadable from the GitHub repository.", }
In this study, we present a novel historical Chinese dataset for named entity recognition, entity linking, coreference and entity relations. We use data from Chinese newspapers from 1872 to 1949 and multilingual bibliographic resources from the same period. The period and the language are the main strengths of the present work, offering a resource which covers different styles and language uses, as well as the largest historical Chinese NER dataset with manual annotations from this transitional period. After detailing the selection and annotation process, we present the very first results that can be obtained from this dataset. Texts and annotations are freely downloadable from the GitHub repository.
[ "Blouin, Baptiste", "Arm", ", C{\\'e}cile", "Henriot, Christian" ]
A Dataset for Named Entity Recognition and Entity Linking in Chinese Historical Newspapers
lrec-main.35
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.36.bib
https://aclanthology.org/2024.lrec-main.36/
@inproceedings{raithel-etal-2024-dataset, title = "A Dataset for Pharmacovigilance in {G}erman, {F}rench, and {J}apanese: Annotating Adverse Drug Reactions across Languages", author = {Raithel, Lisa and Yeh, Hui-Syuan and Yada, Shuntaro and Grouin, Cyril and Lavergne, Thomas and N{\'e}v{\'e}ol, Aur{\'e}lie and Paroubek, Patrick and Thomas, Philippe and Nishiyama, Tomohiro and M{\"o}ller, Sebastian and Aramaki, Eiji and Matsumoto, Yuji and Roller, Roland and Zweigenbaum, Pierre}, editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.36", pages = "395--414", abstract = "User-generated data sources have gained significance in uncovering Adverse Drug Reactions (ADRs), with an increasing number of discussions occurring in the digital world. However, the existing clinical corpora predominantly revolve around scientific articles in English. This work presents a multilingual corpus of texts concerning ADRs gathered from diverse sources, including patient fora, social media, and clinical reports in German, French, and Japanese. Our corpus contains annotations covering 12 entity types, four attribute types, and 13 relation types. It contributes to the development of real-world multilingual language models for healthcare. We provide statistics to highlight certain challenges associated with the corpus and conduct preliminary experiments resulting in strong baselines for extracting entities and relations between these entities, both within and across languages.", }
User-generated data sources have gained significance in uncovering Adverse Drug Reactions (ADRs), with an increasing number of discussions occurring in the digital world. However, the existing clinical corpora predominantly revolve around scientific articles in English. This work presents a multilingual corpus of texts concerning ADRs gathered from diverse sources, including patient fora, social media, and clinical reports in German, French, and Japanese. Our corpus contains annotations covering 12 entity types, four attribute types, and 13 relation types. It contributes to the development of real-world multilingual language models for healthcare. We provide statistics to highlight certain challenges associated with the corpus and conduct preliminary experiments resulting in strong baselines for extracting entities and relations between these entities, both within and across languages.
[ "Raithel, Lisa", "Yeh, Hui-Syuan", "Yada, Shuntaro", "Grouin, Cyril", "Lavergne, Thomas", "N{\\'e}v{\\'e}ol, Aur{\\'e}lie", "Paroubek, Patrick", "Thomas, Philippe", "Nishiyama, Tomohiro", "M{\\\"o}ller, Sebastian", "Aramaki, Eiji", "Matsumoto, Yuji", "Roller, Rol", "", "Zweigenbaum, Pierre" ]
A Dataset for Pharmacovigilance in German, French, and Japanese: Annotating Adverse Drug Reactions across Languages
lrec-main.36
Poster
2403.18336
[ "https://github.com/dotkat-dotcome/keepha-adr" ]
https://huggingface.co./papers/2403.18336
0
0
0
14
1
[]
[]
[]
https://aclanthology.org/2024.lrec-main.37.bib
https://aclanthology.org/2024.lrec-main.37/
@inproceedings{kumar-etal-2024-adding, title = "Adding {SPICE} to Life: Speaker Profiling in Multiparty Conversations", author = "Kumar, Shivani and Gupta, Rishabh and Akhtar, Md. Shad and Chakraborty, Tanmoy", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.37", pages = "415--425", abstract = "In the realm of conversational dynamics, individual idiosyncrasies challenge the suitability of a one-size-fits-all approach for dialogue agent responses. Prior studies often assumed the speaker{'}s persona{'}s immediate availability, a premise not universally applicable. To address this gap, we explore the Speaker Profiling in Conversations (SPC) task, aiming to synthesize persona attributes for each dialogue participant. SPC comprises three core subtasks: persona discovery, persona-type identification, and persona-value extraction. The first subtask identifies persona-related utterances, the second classifies specific attributes, and the third extracts precise values for the persona. To confront this multifaceted challenge, we{'}ve diligently compiled SPICE, an annotated dataset, underpinning our thorough evaluation of diverse baseline models. Additionally, we benchmark these findings against our innovative neural model, SPOT, presenting an exhaustive analysis encompassing a nuanced assessment of quantitative and qualitative merits and limitations.", }
In the realm of conversational dynamics, individual idiosyncrasies challenge the suitability of a one-size-fits-all approach for dialogue agent responses. Prior studies often assumed the speaker{'}s persona{'}s immediate availability, a premise not universally applicable. To address this gap, we explore the Speaker Profiling in Conversations (SPC) task, aiming to synthesize persona attributes for each dialogue participant. SPC comprises three core subtasks: persona discovery, persona-type identification, and persona-value extraction. The first subtask identifies persona-related utterances, the second classifies specific attributes, and the third extracts precise values for the persona. To confront this multifaceted challenge, we{'}ve diligently compiled SPICE, an annotated dataset, underpinning our thorough evaluation of diverse baseline models. Additionally, we benchmark these findings against our innovative neural model, SPOT, presenting an exhaustive analysis encompassing a nuanced assessment of quantitative and qualitative merits and limitations.
[ "Kumar, Shivani", "Gupta, Rishabh", "Akhtar, Md. Shad", "Chakraborty, Tanmoy" ]
Adding SPICE to Life: Speaker Profiling in Multiparty Conversations
lrec-main.37
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.38.bib
https://aclanthology.org/2024.lrec-main.38/
@inproceedings{hauptmann-etal-2024-adea, title = "{ADEA}: An Argumentative Dialogue Dataset on Ethical Issues Concerning Future {A}.{I}. Applications", author = "Hauptmann, Christian and Krenzer, Adrian and Krause, Antonia and Puppe, Frank", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.38", pages = "426--437", abstract = "Introducing ADEA: a German dataset that captures online dialogues and focuses on ethical issues related to future AI applications. This dataset, which includes over 2800 labeled user utterances on four different topics, is specifically designed for the training of chatbots that can navigate the complexities of real-world ethical AI conversations. The creation of these dialogues is the result of two carefully conducted studies in which university students interacted with an argumentative dialogue system. A fundamental part of our methodology is the use of German argument graphs. These graphs not only form the knowledge base of the dialogue system but also serve as an effective annotation scheme for the dialogues. Apart from the introduction of the dataset and the argument graphs, we provide a preliminary benchmark using GPT-4 via the OpenAI API. This provides researchers with a concrete reference point while demonstrating the potential of our dataset. We make our dataset and argument graphs available at https://github.com/HaupChris/ADEA-Dialogue-Dataset.", }
Introducing ADEA: a German dataset that captures online dialogues and focuses on ethical issues related to future AI applications. This dataset, which includes over 2800 labeled user utterances on four different topics, is specifically designed for the training of chatbots that can navigate the complexities of real-world ethical AI conversations. The creation of these dialogues is the result of two carefully conducted studies in which university students interacted with an argumentative dialogue system. A fundamental part of our methodology is the use of German argument graphs. These graphs not only form the knowledge base of the dialogue system but also serve as an effective annotation scheme for the dialogues. Apart from the introduction of the dataset and the argument graphs, we provide a preliminary benchmark using GPT-4 via the OpenAI API. This provides researchers with a concrete reference point while demonstrating the potential of our dataset. We make our dataset and argument graphs available at https://github.com/HaupChris/ADEA-Dialogue-Dataset.
[ "Hauptmann, Christian", "Krenzer, Adrian", "Krause, Antonia", "Puppe, Frank" ]
ADEA: An Argumentative Dialogue Dataset on Ethical Issues Concerning Future A.I. Applications
lrec-main.38
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.39.bib
https://aclanthology.org/2024.lrec-main.39/
@inproceedings{turki-etal-2024-decade, title = "A Decade of Scholarly Research on Open Knowledge Graphs", author = "Turki, Houcemeddine and Owodunni, Abraham Toluwase and Hadj Taieb, Mohamed Ali and Bile, Ren{\'e} Fabrice and Ben Aouicha, Mohamed", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.39", pages = "438--448", abstract = "The proliferation of open knowledge graphs has led to a surge in scholarly research on the topic over the past decade. This paper presents a bibliometric analysis of the scholarly literature on open knowledge graphs published between 2013 and 2023. The study aims to identify the trends, patterns, and impact of research in this field, as well as the key topics and research questions that have emerged. The work uses bibliometric techniques to analyze a sample of 4445 scholarly articles retrieved from Scopus. The findings reveal an ever-increasing number of publications on open knowledge graphs published every year, particularly in developed countries (+50 per year). These outputs are published in highly-referred scholarly journals and conferences. The study identifies three main research themes: (1) knowledge graph construction and enrichment, (2) evaluation and reuse, and (3) fusion of knowledge graphs into NLP systems. Within these themes, the study identifies specific tasks that have received considerable attention, including entity linking, knowledge graph embedding, and graph neural networks.", }
The proliferation of open knowledge graphs has led to a surge in scholarly research on the topic over the past decade. This paper presents a bibliometric analysis of the scholarly literature on open knowledge graphs published between 2013 and 2023. The study aims to identify the trends, patterns, and impact of research in this field, as well as the key topics and research questions that have emerged. The work uses bibliometric techniques to analyze a sample of 4445 scholarly articles retrieved from Scopus. The findings reveal an ever-increasing number of publications on open knowledge graphs published every year, particularly in developed countries (+50 per year). These outputs are published in highly-referred scholarly journals and conferences. The study identifies three main research themes: (1) knowledge graph construction and enrichment, (2) evaluation and reuse, and (3) fusion of knowledge graphs into NLP systems. Within these themes, the study identifies specific tasks that have received considerable attention, including entity linking, knowledge graph embedding, and graph neural networks.
[ "Turki, Houcemeddine", "Owodunni, Abraham Toluwase", "Hadj Taieb, Mohamed Ali", "Bile, Ren{\\'e} Fabrice", "Ben Aouicha, Mohamed" ]
A Decade of Scholarly Research on Open Knowledge Graphs
lrec-main.39
Poster
2306.13186
[ "https://github.com/data-engineering-and-semantics/openkgbiblio" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.40.bib
https://aclanthology.org/2024.lrec-main.40/
@inproceedings{thayaparan-etal-2024-differentiable, title = "A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference", author = "Thayaparan, Mokanarangan and Valentino, Marco and Freitas, Andr{\'e}", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.40", pages = "449--458", abstract = "Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI). However, traditional ILP frameworks are non-differentiable, posing critical challenges for the integration of continuous language representations based on deep learning. In this paper, we introduce a novel approach, named Diff-Comb Explainer, a neuro-symbolic architecture for explanation-based NLI based on Differentiable BlackBox Combinatorial Solvers (DBCS). Differently from existing neuro-symbolic solvers, Diff-Comb Explainer does not necessitate a continuous relaxation of the semantic constraints, enabling a direct, more precise, and efficient incorporation of neural representations into the ILP formulation. Our experiments demonstrate that Diff-Comb Explainer achieves superior performance when compared to conventional ILP solvers, neuro-symbolic black-box solvers, and Transformer-based encoders. Moreover, a deeper analysis reveals that Diff-Comb Explainer can significantly improve the precision, consistency, and faithfulness of the constructed explanations, opening new opportunities for research on neuro-symbolic architectures for explainable and transparent NLI in complex domains.", }
Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI). However, traditional ILP frameworks are non-differentiable, posing critical challenges for the integration of continuous language representations based on deep learning. In this paper, we introduce a novel approach, named Diff-Comb Explainer, a neuro-symbolic architecture for explanation-based NLI based on Differentiable BlackBox Combinatorial Solvers (DBCS). Differently from existing neuro-symbolic solvers, Diff-Comb Explainer does not necessitate a continuous relaxation of the semantic constraints, enabling a direct, more precise, and efficient incorporation of neural representations into the ILP formulation. Our experiments demonstrate that Diff-Comb Explainer achieves superior performance when compared to conventional ILP solvers, neuro-symbolic black-box solvers, and Transformer-based encoders. Moreover, a deeper analysis reveals that Diff-Comb Explainer can significantly improve the precision, consistency, and faithfulness of the constructed explanations, opening new opportunities for research on neuro-symbolic architectures for explainable and transparent NLI in complex domains.
[ "Thayaparan, Mokanarangan", "Valentino, Marco", "Freitas, Andr{\\'e}" ]
A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference
lrec-main.40
Poster
2404.02625
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.41.bib
https://aclanthology.org/2024.lrec-main.41/
@inproceedings{nagai-etal-2024-document, title = "A Document-Level Text Simplification Dataset for {J}apanese", author = "Nagai, Yoshinari and Oka, Teruaki and Komachi, Mamoru", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.41", pages = "459--476", abstract = "Document-level text simplification, a task that combines single-document summarization and intra-sentence simplification, has garnered significant attention. However, studies have primarily focused on languages such as English and German, leaving Japanese and similar languages underexplored because of a scarcity of linguistic resources. In this study, we devised JADOS, the first Japanese document-level text simplification dataset based on newspaper articles and Wikipedia. Our dataset focuses on simplification, to enhance readability by reducing the number of sentences and tokens in a document. We conducted investigations using our dataset. Firstly, we analyzed the characteristics of Japanese simplification by comparing it across different domains and with English counterparts. Moreover, we experimentally evaluated the performances of text summarization methods, transformer-based text simplification models, and large language models. In terms of D-SARI scores, the transformer-based models performed best across all domains. Finally, we manually evaluated several model outputs and target articles, demonstrating the need for document-level text simplification models in Japanese.", }
Document-level text simplification, a task that combines single-document summarization and intra-sentence simplification, has garnered significant attention. However, studies have primarily focused on languages such as English and German, leaving Japanese and similar languages underexplored because of a scarcity of linguistic resources. In this study, we devised JADOS, the first Japanese document-level text simplification dataset based on newspaper articles and Wikipedia. Our dataset focuses on simplification, to enhance readability by reducing the number of sentences and tokens in a document. We conducted investigations using our dataset. Firstly, we analyzed the characteristics of Japanese simplification by comparing it across different domains and with English counterparts. Moreover, we experimentally evaluated the performances of text summarization methods, transformer-based text simplification models, and large language models. In terms of D-SARI scores, the transformer-based models performed best across all domains. Finally, we manually evaluated several model outputs and target articles, demonstrating the need for document-level text simplification models in Japanese.
[ "Nagai, Yoshinari", "Oka, Teruaki", "Komachi, Mamoru" ]
A Document-Level Text Simplification Dataset for Japanese
lrec-main.41
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.42.bib
https://aclanthology.org/2024.lrec-main.42/
@inproceedings{han-etal-2024-dual, title = "A Dual-View Approach to Classifying Radiology Reports by Co-Training", author = "Han, Yutong and Yuan, Yan and Mou, Lili", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.42", pages = "477--483", abstract = "Radiology report analysis provides valuable information that can aid with public health initiatives, and has been attracting increasing attention from the research community. In this work, we present a novel insight that the structure of a radiology report (namely, the Findings and Impression sections) offers different views of a radiology scan. Based on this intuition, we further propose a co-training approach, where two machine learning models are built upon the Findings and Impression sections, respectively, and use each other{'}s information to boost performance with massive unlabeled data in a semi-supervised manner. We conducted experiments in a public health surveillance study, and results show that our co-training approach is able to improve performance using the dual views and surpass competing supervised and semi-supervised methods.", }
Radiology report analysis provides valuable information that can aid with public health initiatives, and has been attracting increasing attention from the research community. In this work, we present a novel insight that the structure of a radiology report (namely, the Findings and Impression sections) offers different views of a radiology scan. Based on this intuition, we further propose a co-training approach, where two machine learning models are built upon the Findings and Impression sections, respectively, and use each other{'}s information to boost performance with massive unlabeled data in a semi-supervised manner. We conducted experiments in a public health surveillance study, and results show that our co-training approach is able to improve performance using the dual views and surpass competing supervised and semi-supervised methods.
[ "Han, Yutong", "Yuan, Yan", "Mou, Lili" ]
A Dual-View Approach to Classifying Radiology Reports by Co-Training
lrec-main.42
Poster
2406.05995
[ "https://github.com/manga-uofa/radiology-cotrain" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.43.bib
https://aclanthology.org/2024.lrec-main.43/
@inproceedings{lee-etal-2024-advancing, title = "Advancing Semi-Supervised Learning for Automatic Post-Editing: Data-Synthesis by Mask-Infilling with Erroneous Terms", author = "Lee, Wonkee and Heo, Seong-Hwan and Lee, Jong-Hyeok", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.43", pages = "484--494", abstract = "Semi-supervised learning that leverages synthetic data for training has been widely adopted for developing automatic post-editing (APE) models due to the lack of training data. With this aim, we focus on data-synthesis methods to create high-quality synthetic data. Given that APE takes as input a machine-translation result that might include errors, we present a data-synthesis method by which the resulting synthetic data mimic the translation errors found in actual data. We introduce a noising-based data-synthesis method by adapting the masked language model approach, generating a noisy text from a clean text by infilling masked tokens with erroneous tokens. Moreover, we propose selective corpus interleaving that combines two separate synthetic datasets by taking only the advantageous samples to enhance the quality of the synthetic data further. Experimental results show that using the synthetic data created by our approach results in significantly better APE performance than other synthetic data created by existing methods.", }
Semi-supervised learning that leverages synthetic data for training has been widely adopted for developing automatic post-editing (APE) models due to the lack of training data. With this aim, we focus on data-synthesis methods to create high-quality synthetic data. Given that APE takes as input a machine-translation result that might include errors, we present a data-synthesis method by which the resulting synthetic data mimic the translation errors found in actual data. We introduce a noising-based data-synthesis method by adapting the masked language model approach, generating a noisy text from a clean text by infilling masked tokens with erroneous tokens. Moreover, we propose selective corpus interleaving that combines two separate synthetic datasets by taking only the advantageous samples to enhance the quality of the synthetic data further. Experimental results show that using the synthetic data created by our approach results in significantly better APE performance than other synthetic data created by existing methods.
[ "Lee, Wonkee", "Heo, Seong-Hwan", "Lee, Jong-Hyeok" ]
Advancing Semi-Supervised Learning for Automatic Post-Editing: Data-Synthesis by Mask-Infilling with Erroneous Terms
lrec-main.43
Poster
2204.03896
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.44.bib
https://aclanthology.org/2024.lrec-main.44/
@inproceedings{jiang-etal-2024-advancing, title = "Advancing Topic Segmentation and Outline Generation in {C}hinese Texts: The Paragraph-level Topic Representation, Corpus, and Benchmark", author = "Jiang, Feng and Liu, Weihao and Chu, Xiaomin and Li, Peifeng and Zhu, Qiaoming and Li, Haizhou", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.44", pages = "495--506", abstract = "Topic segmentation and outline generation strive to divide a document into coherent topic sections and generate corresponding subheadings, unveiling the discourse topic structure of a document. Compared with sentence-level topic structure, the paragraph-level topic structure can quickly grasp and understand the overall context of the document from a higher level, benefitting many downstream tasks such as summarization, discourse parsing, and information retrieval. However, the lack of large-scale, high-quality Chinese paragraph-level topic structure corpora restrained relative research and applications. To fill this gap, we build the Chinese paragraph-level topic representation, corpus, and benchmark in this paper. Firstly, we propose a hierarchical paragraph-level topic structure representation with three layers to guide the corpus construction. Then, we employ a two-stage man-machine collaborative annotation method to construct the largest Chinese Paragraph-level Topic Structure corpus (CPTS), achieving high quality. We also build several strong baselines, including ChatGPT, to validate the computability of CPTS on two fundamental tasks (topic segmentation and outline generation) and preliminarily verified its usefulness for the downstream task (discourse parsing).", }
Topic segmentation and outline generation strive to divide a document into coherent topic sections and generate corresponding subheadings, unveiling the discourse topic structure of a document. Compared with sentence-level topic structure, the paragraph-level topic structure can quickly grasp and understand the overall context of the document from a higher level, benefitting many downstream tasks such as summarization, discourse parsing, and information retrieval. However, the lack of large-scale, high-quality Chinese paragraph-level topic structure corpora restrained relative research and applications. To fill this gap, we build the Chinese paragraph-level topic representation, corpus, and benchmark in this paper. Firstly, we propose a hierarchical paragraph-level topic structure representation with three layers to guide the corpus construction. Then, we employ a two-stage man-machine collaborative annotation method to construct the largest Chinese Paragraph-level Topic Structure corpus (CPTS), achieving high quality. We also build several strong baselines, including ChatGPT, to validate the computability of CPTS on two fundamental tasks (topic segmentation and outline generation) and preliminarily verified its usefulness for the downstream task (discourse parsing).
[ "Jiang, Feng", "Liu, Weihao", "Chu, Xiaomin", "Li, Peifeng", "Zhu, Qiaoming", "Li, Haizhou" ]
Advancing Topic Segmentation and Outline Generation in Chinese Texts: The Paragraph-level Topic Representation, Corpus, and Benchmark
lrec-main.44
Poster
2305.14790
[ "https://github.com/fjiangai/cpts" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.45.bib
https://aclanthology.org/2024.lrec-main.45/
@inproceedings{zmitrovich-etal-2024-family, title = "A Family of Pretrained Transformer Language Models for {R}ussian", author = "Zmitrovich, Dmitry and Abramov, Aleksandr and Kalmykov, Andrey and Kadulin, Vitaly and Tikhonova, Maria and Taktasheva, Ekaterina and Astafurov, Danil and Baushenko, Mark and Snegirev, Artem and Shavrina, Tatiana and Markov, Sergei S. and Mikhailov, Vladislav and Fenogenova, Alena", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.45", pages = "507--524", abstract = "Transformer language models (LMs) are fundamental to NLP research methodologies and applications in various languages. However, developing such models specifically for the Russian language has received little attention. This paper introduces a collection of 13 Russian Transformer LMs, which spans encoder (ruBERT, ruRoBERTa, ruELECTRA), decoder (ruGPT-3), and encoder-decoder (ruT5, FRED-T5) architectures. We provide a report on the model architecture design and pretraining, and the results of evaluating their generalization abilities on Russian language understanding and generation datasets and benchmarks. By pretraining and releasing these specialized Transformer LMs, we aim to broaden the scope of the NLP research directions and enable the development of industrial solutions for the Russian language.", }
Transformer language models (LMs) are fundamental to NLP research methodologies and applications in various languages. However, developing such models specifically for the Russian language has received little attention. This paper introduces a collection of 13 Russian Transformer LMs, which spans encoder (ruBERT, ruRoBERTa, ruELECTRA), decoder (ruGPT-3), and encoder-decoder (ruT5, FRED-T5) architectures. We provide a report on the model architecture design and pretraining, and the results of evaluating their generalization abilities on Russian language understanding and generation datasets and benchmarks. By pretraining and releasing these specialized Transformer LMs, we aim to broaden the scope of the NLP research directions and enable the development of industrial solutions for the Russian language.
[ "Zmitrovich, Dmitry", "Abramov, Aleks", "r", "Kalmykov, Andrey", "Kadulin, Vitaly", "Tikhonova, Maria", "Taktasheva, Ekaterina", "Astafurov, Danil", "Baushenko, Mark", "Snegirev, Artem", "Shavrina, Tatiana", "Markov, Sergei S.", "Mikhailov, Vladislav", "Fenogenova, Alena" ]
A Family of Pretrained Transformer Language Models for Russian
lrec-main.45
Poster
2309.10931
[ "" ]
https://huggingface.co./papers/2309.10931
2
3
0
12
1
[ "ai-forever/rugpt3large_based_on_gpt2", "ai-forever/FRED-T5-1.7B", "ai-forever/ruRoberta-large", "ai-forever/ruT5-large", "ai-forever/rugpt3small_based_on_gpt2", "ai-forever/ruBert-base", "ai-forever/FRED-T5-large", "ai-forever/rugpt3medium_based_on_gpt2", "ai-forever/ruT5-base", "ai-forever/ruBert-large", "ai-forever/ruElectra-small", "ai-forever/ruElectra-medium", "ai-forever/ruElectra-large", "DFofanov78/rugpt3small_based_on_gpt2", "DFofanov78/rugpt3large_based_on_gpt2", "DFofanov78/rugpt3medium_based_on_gpt2", "Gnider/model_old_working" ]
[]
[ "open-llm-leaderboard/open_llm_leaderboard", "Intel/low_bit_open_llm_leaderboard", "BAAI/open_cn_llm_leaderboard", "gsaivinay/open_llm_leaderboard", "GTBench/GTBench", "big-kek/NeuroKorzh", "felixz/open_llm_leaderboard", "OPTML-Group/UnlearnCanvas-Benchmark", "Vikhrmodels/small-shlepa-lb", "AlexWortega/ruImageCaptionong", "end000/sberbank-ai-FRED-T5-1.7B", "b1sheng/kg_llm_leaderboard_test", "fkonovalenko/llm4career", "AlekseyKorshuk/rugpt3", "kllmagn/sberbank-ai-rugpt3large_based_on_gpt2", "kamakepar/sberbank-ai-rugpt3large", "kamakepar/sberbank-ai-rugpt3large_based_on_gpt2", "neubla/neubla-llm-evaluation-board", "rodrigomasini/data_only_open_llm_leaderboard", "Docfile/open_llm_leaderboard", "alGOriTM207/Ru_DialoModel", "MesonWarrior/vk", "Anonumous/RuImageCaptioning", "AntNikYab/NaturalLanguageProcessing", "jeydipak/nlp_project", "Andrew3875/ai-forever-FRED-T5-large", "orzhan/ruatd", "Lowgreatahm/ai-forever-ruRoberta-large", "yturkunov/finRecommender", "4eJIoBek/ruGPT3-Large", "Crits/ai-forever-rugpt3large_based_on_gpt2", "Heleg/pt2", "dokster/vqa-analysis", "eteron/Dialogue_assistant", "smothiki/open_llm_leaderboard", "pngwn/open_llm_leaderboard", "ultrin/nameform-ru", "pngwn/open_llm_leaderboard_two", "choco9966/LeaderboardTest", "0x1668/open_llm_leaderboard", "pngwn/open_llm_leaderboard-check", "R-uslan/GPTRUS", "asir0z/open_llm_leaderboard", "kbmlcoding/open_llm_leaderboard_free", "choco9966/open-ko-llm-leaderboard", "uhygfd/GPT2", "aichampions/open_llm_leaderboard", "Adeco/open_llm_leaderboard", "anirudh937/open_llm_leaderboard", "smothiki/open_llm_leaderboard2", "4eJIoBek/ruGPT3-Medium", "Jorj2064/gpt_chat_1234", "Jorj2064/kge_bot", "loveto/shad_transformers", "A1ex1/text-generation", "4eJIoBek/ruGPT3-Small", "Serg4451D/RuGPT", "vasevooo/NLP_project", "SaviAnna/history_mistery", "Ilvir/ilva", "vvv-knyazeva/NLP_project", "SaviAnna/History", "ruslanruslanruslan/nlp_project", "Vladislawoo/nlp-gpt-team", "alizhgir/ds-prj-10-w", "RMakushkin/test_space", "HaggiVaggi/nlp_project", "derat0r/derat0r_test_space", "DuckyPolice/ChatStormAI", "Norgan97/forjobtwo", "Ivan1579/ai-forever-rugpt3small_based_on_gpt2", "IvT-DS/nlp_proj", "Maslov-Artem/nlp_proj", "Shchushch/CV", "Unavailable1/Adaptive_reading_assessment", "Solar-Iz/ds-prj-10-w", "ds-meteors/nlp-lstm-team", "NLPLSTMteam/NLP_LSTM_team", "ElbrusGPT/elbrus_text_project", "Andriano2323/NLP_LSTM_team", "Kdnv/nlp_project", "Seppukku/nlp_project_gpt_team" ]
https://aclanthology.org/2024.lrec-main.46.bib
https://aclanthology.org/2024.lrec-main.46/
@inproceedings{tao-etal-2024-fast, title = "A Fast and High-quality Text-to-Speech Method with Compressed Auxiliary Corpus and Limited Target Speaker Corpus", author = "Tao, Ye and Lu, Chaofeng and Liu, Meng and Xu, Kai and Liu, Tianyu and Tian, Yunlong and Du, Yongjie", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.46", pages = "525--535", abstract = "With an auxiliary corpus (non-target speaker corpus) for model pre-training, Text-to-Speech (TTS) methods can generate high-quality speech with a limited target speaker corpus. However, this approach comes with expensive training costs. To overcome the challenge, a high-quality TTS method is proposed, significantly reducing training costs while maintaining the naturalness of synthesized speech. In this paper, we propose an auxiliary corpus compression algorithm that reduces the training cost while the naturalness of the synthesized speech is not significantly degraded. We then use the compressed corpus to pre-train the proposed TTS model CMDTTS, which fuses phoneme and word multi-level prosody modeling components and denoises the generated mel-spectrograms using denoising diffusion probabilistic models (DDPMs). In addition, a fine-tuning step that the conditional generative adversarial network (cGAN) is introduced to embed the target speaker feature and improve speech quality using the target speaker corpus. Experiments are conducted on Chinese and English single speaker{'}s corpora, and the results show that the method effectively balances the model training speed and the synthesized speech quality and outperforms the current models.", }
With an auxiliary corpus (non-target speaker corpus) for model pre-training, Text-to-Speech (TTS) methods can generate high-quality speech with a limited target speaker corpus. However, this approach comes with expensive training costs. To overcome the challenge, a high-quality TTS method is proposed, significantly reducing training costs while maintaining the naturalness of synthesized speech. In this paper, we propose an auxiliary corpus compression algorithm that reduces the training cost while the naturalness of the synthesized speech is not significantly degraded. We then use the compressed corpus to pre-train the proposed TTS model CMDTTS, which fuses phoneme and word multi-level prosody modeling components and denoises the generated mel-spectrograms using denoising diffusion probabilistic models (DDPMs). In addition, a fine-tuning step that the conditional generative adversarial network (cGAN) is introduced to embed the target speaker feature and improve speech quality using the target speaker corpus. Experiments are conducted on Chinese and English single speaker{'}s corpora, and the results show that the method effectively balances the model training speed and the synthesized speech quality and outperforms the current models.
[ "Tao, Ye", "Lu, Chaofeng", "Liu, Meng", "Xu, Kai", "Liu, Tianyu", "Tian, Yunlong", "Du, Yongjie" ]
A Fast and High-quality Text-to-Speech Method with Compressed Auxiliary Corpus and Limited Target Speaker Corpus
lrec-main.46
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.47.bib
https://aclanthology.org/2024.lrec-main.47/
@inproceedings{yang-etal-2024-frustratingly, title = "A Frustratingly Simple Decoding Method for Neural Text Generation", author = "Yang, Haoran and Cai, Deng and Li, Huayang and Bi, Wei and Lam, Wai and Shi, Shuming", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.47", pages = "536--557", abstract = "We introduce a frustratingly simple, highly efficient, and surprisingly effective decoding method, termed Frustratingly Simple Decoding (FSD), for neural text generation. The idea behind FSD is straightforward: We construct an anti-language model (anti-LM) based on previously generated text, which is employed to penalize the future generation of repetitive content. The anti-LM can be implemented as simple as an n-gram language model or a vectorized variant. In this way, FSD incurs no additional model parameters and negligible computational overhead (FSD can be as fast as greedy search). Despite its simplicity, FSD is surprisingly effective and generalizes across different datasets, models, and languages. Extensive experiments show that FSD outperforms established strong baselines in terms of generation quality, decoding speed, and universality.", }
We introduce a frustratingly simple, highly efficient, and surprisingly effective decoding method, termed Frustratingly Simple Decoding (FSD), for neural text generation. The idea behind FSD is straightforward: We construct an anti-language model (anti-LM) based on previously generated text, which is employed to penalize the future generation of repetitive content. The anti-LM can be implemented as simple as an n-gram language model or a vectorized variant. In this way, FSD incurs no additional model parameters and negligible computational overhead (FSD can be as fast as greedy search). Despite its simplicity, FSD is surprisingly effective and generalizes across different datasets, models, and languages. Extensive experiments show that FSD outperforms established strong baselines in terms of generation quality, decoding speed, and universality.
[ "Yang, Haoran", "Cai, Deng", "Li, Huayang", "Bi, Wei", "Lam, Wai", "Shi, Shuming" ]
A Frustratingly Simple Decoding Method for Neural Text Generation
lrec-main.47
Poster
2305.12675
[ "https://github.com/lhryang/fsd" ]
https://huggingface.co./papers/2305.12675
0
0
0
6
1
[]
[]
[]
https://aclanthology.org/2024.lrec-main.48.bib
https://aclanthology.org/2024.lrec-main.48/
@inproceedings{inadumi-etal-2024-gaze, title = "A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous {J}apanese Questions", author = "Inadumi, Shun and Kawano, Seiya and Yuguchi, Akishige and Kawanishi, Yasutomo and Yoshino, Koichiro", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.48", pages = "558--571", abstract = "Situated conversations, which refer to visual information as visual question answering (VQA), often contain ambiguities caused by reliance on directive information. This problem is exacerbated because some languages, such as Japanese, often omit subjective or objective terms. Such ambiguities in questions are often clarified by the contexts in conversational situations, such as joint attention with a user or user gaze information. In this study, we propose the Gaze-grounded VQA dataset (GazeVQA) that clarifies ambiguous questions using gaze information by focusing on a clarification process complemented by gaze information. We also propose a method that utilizes gaze target estimation results to improve the accuracy of GazeVQA tasks. Our experimental results showed that the proposed method improved the performance in some cases of a VQA system on GazeVQA and identified some typical problems of GazeVQA tasks that need to be improved.", }
Situated conversations, which refer to visual information as visual question answering (VQA), often contain ambiguities caused by reliance on directive information. This problem is exacerbated because some languages, such as Japanese, often omit subjective or objective terms. Such ambiguities in questions are often clarified by the contexts in conversational situations, such as joint attention with a user or user gaze information. In this study, we propose the Gaze-grounded VQA dataset (GazeVQA) that clarifies ambiguous questions using gaze information by focusing on a clarification process complemented by gaze information. We also propose a method that utilizes gaze target estimation results to improve the accuracy of GazeVQA tasks. Our experimental results showed that the proposed method improved the performance in some cases of a VQA system on GazeVQA and identified some typical problems of GazeVQA tasks that need to be improved.
[ "Inadumi, Shun", "Kawano, Seiya", "Yuguchi, Akishige", "Kawanishi, Yasutomo", "Yoshino, Koichiro" ]
A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions
lrec-main.48
Poster
2403.17545
[ "https://github.com/riken-grp/gazevqa" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.49.bib
https://aclanthology.org/2024.lrec-main.49/
@inproceedings{fung-etal-2024-agenda, title = "Agenda-Driven Question Generation: A Case Study in the Courtroom Domain", author = "Fung, Yi and Kumar, Anoop and Galstyan, Aram and Ji, Heng and Natarajan, Prem", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.49", pages = "572--583", abstract = "This paper introduces a novel problem of automated question generation for courtroom examinations, CourtQG. While question generation has been studied in domains such as educational testing and product description, CourtQG poses several unique challenges owing to its non-cooperative and agenda-driven nature. Specifically, not only the generated questions need to be relevant to the case and underlying context, they also have to achieve certain objectives such as challenging the opponent{'}s arguments and/or revealing potential inconsistencies in their answers. We propose to leverage large language models (LLM) for CourtQG by fine-tuning them on two auxiliary tasks, agenda explanation (i.e., uncovering the underlying intents) and question type prediction. We additionally propose cold-start generation of questions from background documents without relying on examination history. We construct a dataset to evaluate our proposed method and show that it generates better questions according to standard metrics when compared to several baselines.", }
This paper introduces a novel problem of automated question generation for courtroom examinations, CourtQG. While question generation has been studied in domains such as educational testing and product description, CourtQG poses several unique challenges owing to its non-cooperative and agenda-driven nature. Specifically, not only the generated questions need to be relevant to the case and underlying context, they also have to achieve certain objectives such as challenging the opponent{'}s arguments and/or revealing potential inconsistencies in their answers. We propose to leverage large language models (LLM) for CourtQG by fine-tuning them on two auxiliary tasks, agenda explanation (i.e., uncovering the underlying intents) and question type prediction. We additionally propose cold-start generation of questions from background documents without relying on examination history. We construct a dataset to evaluate our proposed method and show that it generates better questions according to standard metrics when compared to several baselines.
[ "Fung, Yi", "Kumar, Anoop", "Galstyan, Aram", "Ji, Heng", "Natarajan, Prem" ]
Agenda-Driven Question Generation: A Case Study in the Courtroom Domain
lrec-main.49
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.50.bib
https://aclanthology.org/2024.lrec-main.50/
@inproceedings{zhao-penn-2024-generative, title = "A Generative Model for {L}ambek Categorial Sequents", author = "Zhao, Jinman and Penn, Gerald", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.50", pages = "584--593", abstract = "In this work, we introduce a generative model, PLC+, for generating Lambek Categorial Grammar(LCG) sequents. We also introduce a simple method to numerically estimate the model{'}s parameters from an annotated corpus. Then we compare our model with probabilistic context-free grammars (PCFGs) and show that PLC+ simultaneously assigns a higher probability to a common corpus, and has greater coverage.", }
In this work, we introduce a generative model, PLC+, for generating Lambek Categorial Grammar(LCG) sequents. We also introduce a simple method to numerically estimate the model{'}s parameters from an annotated corpus. Then we compare our model with probabilistic context-free grammars (PCFGs) and show that PLC+ simultaneously assigns a higher probability to a common corpus, and has greater coverage.
[ "Zhao, Jinman", "Penn, Gerald" ]
A Generative Model for Lambek Categorial Sequents
lrec-main.50
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.51.bib
https://aclanthology.org/2024.lrec-main.51/
@inproceedings{buzato-cunha-2024-agent, title = "Agent-based Modeling of Language Change in a Small-world Network", author = "Buzato, Dalmo and Cunha, Evandro", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.51", pages = "594--599", abstract = "Language change has been the subject of numerous studies in linguistics. However, due to the dynamic and complex nature of this phenomenon, and to the difficulty of obtaining extensive real data of language in use, some of its aspects remain obscure. In recent years, nonetheless, research has used computational modeling to simulate features related to variation, change, propagation, and evolution of languages in speech communities, finding compelling results. In this article, agent-based modeling and simulation is used to study language change. Drawing on previous studies, a speech community was modeled using Zachary{'}s karate club network, a well-established small-world network model in the field of complex systems. Idiolects were assigned through numerical values for each agent. The results demonstrate that the centrality of each agent in the network, interpreted as social prestige, appears to be a factor influencing change. Additionally, the nature of idiolects also seems to impact the spread of linguistic variants in the language change process. These findings complement the theoretical understanding of the language change phenomenon with new simulation data and provide new avenues for research.", }
Language change has been the subject of numerous studies in linguistics. However, due to the dynamic and complex nature of this phenomenon, and to the difficulty of obtaining extensive real data of language in use, some of its aspects remain obscure. In recent years, nonetheless, research has used computational modeling to simulate features related to variation, change, propagation, and evolution of languages in speech communities, finding compelling results. In this article, agent-based modeling and simulation is used to study language change. Drawing on previous studies, a speech community was modeled using Zachary{'}s karate club network, a well-established small-world network model in the field of complex systems. Idiolects were assigned through numerical values for each agent. The results demonstrate that the centrality of each agent in the network, interpreted as social prestige, appears to be a factor influencing change. Additionally, the nature of idiolects also seems to impact the spread of linguistic variants in the language change process. These findings complement the theoretical understanding of the language change phenomenon with new simulation data and provide new avenues for research.
[ "Buzato, Dalmo", "Cunha, Ev", "ro" ]
Agent-based Modeling of Language Change in a Small-world Network
lrec-main.51
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.52.bib
https://aclanthology.org/2024.lrec-main.52/
@inproceedings{millour-etal-2024-agettivu, title = "Agettivu, Aggitivu o Aghjettivu? {POS} Tagging {C}orsican Dialects", author = "Millour, Alice and Brasile, Lorenza and Ghia, Alberto and Kevers, Laurent", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.52", pages = "600--608", abstract = "In this paper we present a series of experiments towards POS tagging Corsican, a less-resourced language spoken in Corsica and linguistically related to Italian. The first contribution is Corsican-POS, the first gold standard POS-tagged corpus for Corsica, composed of 500 sentences manually annotated with the Universal POS tagset. Our second contribution is a set of experiments and evaluation of POS tagging models which starts with a baseline model for Italian and is aimed at finding the best training configuration, namely in terms of the size and combination strategy of the existing raw and annotated resources. These experiments result in (i) the first POS tagger for Corsican, reaching an accuracy of 93.38{\%}, (ii) a quantification of the gain provided by the use of each available resource. We find that the optimal configuration uses Italian word embeddings further specialized with Corsican embeddings and trained on the largest gold corpus for Corsican available so far.", }
In this paper we present a series of experiments towards POS tagging Corsican, a less-resourced language spoken in Corsica and linguistically related to Italian. The first contribution is Corsican-POS, the first gold standard POS-tagged corpus for Corsica, composed of 500 sentences manually annotated with the Universal POS tagset. Our second contribution is a set of experiments and evaluation of POS tagging models which starts with a baseline model for Italian and is aimed at finding the best training configuration, namely in terms of the size and combination strategy of the existing raw and annotated resources. These experiments result in (i) the first POS tagger for Corsican, reaching an accuracy of 93.38{\%}, (ii) a quantification of the gain provided by the use of each available resource. We find that the optimal configuration uses Italian word embeddings further specialized with Corsican embeddings and trained on the largest gold corpus for Corsican available so far.
[ "Millour, Alice", "Brasile, Lorenza", "Ghia, Alberto", "Kevers, Laurent" ]
Agettivu, Aggitivu o Aghjettivu? POS Tagging Corsican Dialects
lrec-main.52
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.53.bib
https://aclanthology.org/2024.lrec-main.53/
@inproceedings{yin-etal-2024-aggregation, title = "Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models", author = "Yin, Zhangyue and Sun, Qiushi and Guo, Qipeng and Zeng, Zhiyuan and Li, Xiaonan and Sun, Tianxiang and Chang, Cheng and Cheng, Qinyuan and Wang, Ding and Mou, Xiaofeng and Qiu, Xipeng and Huang, Xuanjing", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.53", pages = "609--625", abstract = "Recent advancements in Chain-of-Thought prompting have facilitated significant breakthroughs for Large Language Models (LLMs) in complex reasoning tasks. Current research enhances the reasoning performance of LLMs by sampling multiple reasoning chains and ensembling based on the answer frequency. However, this approach fails in scenarios where the correct answers are in the minority. We identify this as a primary factor constraining the reasoning capabilities of LLMs, a limitation that cannot be resolved solely based on the predicted answers. To address this shortcoming, we introduce a hierarchical reasoning aggregation framework AoR (Aggregation of Reasoning), which selects answers based on the evaluation of reasoning chains. Additionally, AoR incorporates dynamic sampling, adjusting the number of reasoning chains in accordance with the complexity of the task. Experimental results on a series of complex reasoning tasks show that AoR outperforms prominent ensemble methods. Further analysis reveals that AoR not only adapts various LLMs but also achieves a superior performance ceiling when compared to current methods.", }
Recent advancements in Chain-of-Thought prompting have facilitated significant breakthroughs for Large Language Models (LLMs) in complex reasoning tasks. Current research enhances the reasoning performance of LLMs by sampling multiple reasoning chains and ensembling based on the answer frequency. However, this approach fails in scenarios where the correct answers are in the minority. We identify this as a primary factor constraining the reasoning capabilities of LLMs, a limitation that cannot be resolved solely based on the predicted answers. To address this shortcoming, we introduce a hierarchical reasoning aggregation framework AoR (Aggregation of Reasoning), which selects answers based on the evaluation of reasoning chains. Additionally, AoR incorporates dynamic sampling, adjusting the number of reasoning chains in accordance with the complexity of the task. Experimental results on a series of complex reasoning tasks show that AoR outperforms prominent ensemble methods. Further analysis reveals that AoR not only adapts to various LLMs but also achieves a superior performance ceiling when compared to current methods.
[ "Yin, Zhangyue", "Sun, Qiushi", "Guo, Qipeng", "Zeng, Zhiyuan", "Li, Xiaonan", "Sun, Tianxiang", "Chang, Cheng", "Cheng, Qinyuan", "Wang, Ding", "Mou, Xiaofeng", "Qiu, Xipeng", "Huang, Xuanjing" ]
Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models
lrec-main.53
Poster
2405.12939
[ "https://github.com/yinzhangyue/AoR" ]
https://huggingface.co./papers/2405.12939
1
1
0
12
1
[]
[]
[]
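The AoR record above contrasts frequency-based answer ensembling with selection based on evaluating reasoning chains. The following is a minimal sketch of that contrast, assuming a generic `score_chain` evaluator as a hypothetical stand-in for the paper's chain evaluation (a real system would query an LLM judge); it is illustrative only, not the authors' implementation.

```python
from collections import Counter, defaultdict
from typing import Callable, List, Tuple

def majority_vote(chains: List[Tuple[str, str]]) -> str:
    """Self-consistency baseline: pick the most frequent final answer."""
    return Counter(answer for _, answer in chains).most_common(1)[0][0]

def aggregate_by_reasoning(
    chains: List[Tuple[str, str]],
    score_chain: Callable[[str], float],
) -> str:
    """Schematic AoR-style selection: group chains by answer and pick the answer
    whose supporting chains receive the highest average evaluation score, so a
    well-reasoned minority answer can beat a frequent but poorly reasoned one."""
    scores = defaultdict(list)
    for reasoning, answer in chains:
        scores[answer].append(score_chain(reasoning))
    return max(scores, key=lambda a: sum(scores[a]) / len(scores[a]))

# Toy usage with a stand-in scorer (chain length as a proxy for quality).
chains = [("guess without checking", "42"), ("guess again", "42"),
          ("careful step-by-step derivation", "17")]
print(majority_vote(chains))                              # -> "42"
print(aggregate_by_reasoning(chains, lambda r: len(r)))   # -> "17"
```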
https://aclanthology.org/2024.lrec-main.54.bib
https://aclanthology.org/2024.lrec-main.54/
@inproceedings{wang-etal-2024-hierarchical, title = "A Hierarchical Sequence-to-Set Model with Coverage Mechanism for Aspect Category Sentiment Analysis", author = "Wang, Siyu and Jiang, Jianhui and Dai, Shengran and Qiu, Jiangtao", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.54", pages = "626--635", abstract = "Aspect category sentiment analysis (ACSA) aims to simultaneously detect aspect categories and their corresponding sentiment polarities (category-sentiment pairs). Some recent studies have used pre-trained generative models to complete ACSA and achieved good results. However, for ACSA, generative models still face three challenges. First, addressing the missing predictions in ACSA is crucial, which involves accurately predicting all category-sentiment pairs within a sentence. Second, category-sentiment pairs are inherently a disordered set. Consequently, the model incurs a penalty even when its predictions are correct, but the predicted order is inconsistent with the ground truths. Third, different aspect categories should focus on relevant sentiment words, and the polarity of the aspect category should be the aggregation of the polarities of these sentiment words. This paper proposes a hierarchical generative model with a coverage mechanism using sequence-to-set learning to tackle all three challenges simultaneously. Our model{'}s superior performance is demonstrated through extensive experiments conducted on several datasets.", }
Aspect category sentiment analysis (ACSA) aims to simultaneously detect aspect categories and their corresponding sentiment polarities (category-sentiment pairs). Some recent studies have used pre-trained generative models to complete ACSA and achieved good results. However, for ACSA, generative models still face three challenges. First, addressing the missing predictions in ACSA is crucial, which involves accurately predicting all category-sentiment pairs within a sentence. Second, category-sentiment pairs are inherently an unordered set. Consequently, the model incurs a penalty even when its predictions are correct but ordered differently from the ground truth. Third, different aspect categories should focus on relevant sentiment words, and the polarity of the aspect category should be the aggregation of the polarities of these sentiment words. This paper proposes a hierarchical generative model with a coverage mechanism using sequence-to-set learning to tackle all three challenges simultaneously. Our model's superior performance is demonstrated through extensive experiments conducted on several datasets.
[ "Wang, Siyu", "Jiang, Jianhui", "Dai, Shengran", "Qiu, Jiangtao" ]
A Hierarchical Sequence-to-Set Model with Coverage Mechanism for Aspect Category Sentiment Analysis
lrec-main.54
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.55.bib
https://aclanthology.org/2024.lrec-main.55/
@inproceedings{niu-etal-2024-hong, title = "A {H}ong {K}ong {S}ign {L}anguage Corpus Collected from Sign-interpreted {TV} News", author = "Niu, Zhe and Zuo, Ronglai and Mak, Brian and Wei, Fangyun", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.55", pages = "636--646", abstract = "This paper introduces TVB-HKSL-News, a new Hong Kong sign language (HKSL) dataset collected from a TV news program over a period of 7 months. The dataset is collected to enrich resources for HKSL and support research in large-vocabulary continuous sign language recognition (SLR) and translation (SLT). It consists of 16.07 hours of sign videos of two signers with a vocabulary of 6,515 glosses (for SLR) and 2,850 Chinese characters or 18K Chinese words (for SLT). One signer has 11.66 hours of sign videos and the other has 4.41 hours. One objective in building the dataset is to support the investigation of how well large-vocabulary continuous sign language recognition/translation can be done for a single signer given a (relatively) large amount of his/her training data, which could potentially lead to the development of new modeling methods. Besides, most parts of the data collection pipeline are automated with little human intervention; we believe that our collection method can be scaled up to collect more sign language data easily for SLT in the future for any sign languages if such sign-interpreted videos are available. We also run a SOTA SLR/SLT model on the dataset and get a baseline SLR word error rate of 34.08{\%} and a baseline SLT BLEU-4 score of 23.58 for benchmarking future research on the dataset.", }
This paper introduces TVB-HKSL-News, a new Hong Kong sign language (HKSL) dataset collected from a TV news program over a period of 7 months. The dataset is collected to enrich resources for HKSL and support research in large-vocabulary continuous sign language recognition (SLR) and translation (SLT). It consists of 16.07 hours of sign videos of two signers with a vocabulary of 6,515 glosses (for SLR) and 2,850 Chinese characters or 18K Chinese words (for SLT). One signer has 11.66 hours of sign videos and the other has 4.41 hours. One objective in building the dataset is to support the investigation of how well large-vocabulary continuous sign language recognition/translation can be done for a single signer given a (relatively) large amount of his/her training data, which could potentially lead to the development of new modeling methods. Besides, most parts of the data collection pipeline are automated with little human intervention; we believe that our collection method can be scaled up to collect more sign language data easily for SLT in the future for any sign languages if such sign-interpreted videos are available. We also run a SOTA SLR/SLT model on the dataset and get a baseline SLR word error rate of 34.08% and a baseline SLT BLEU-4 score of 23.58 for benchmarking future research on the dataset.
[ "Niu, Zhe", "Zuo, Ronglai", "Mak, Brian", "Wei, Fangyun" ]
A Hong Kong Sign Language Corpus Collected from Sign-interpreted TV News
lrec-main.55
Poster
2405.00980
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.56.bib
https://aclanthology.org/2024.lrec-main.56/
@inproceedings{negi-etal-2024-hybrid, title = "A Hybrid Approach to Aspect Based Sentiment Analysis Using Transfer Learning", author = "Negi, Gaurav and Sarkar, Rajdeep and Zayed, Omnia and Buitelaar, Paul", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.56", pages = "647--658", abstract = "Aspect-Based Sentiment Analysis ( ABSA) aims to identify terms or multiword expressions (MWEs) on which sentiments are expressed and the sentiment polarities associated with them. The development of supervised models has been at the forefront of research in this area. However, training these models requires the availability of manually annotated datasets which is both expensive and time-consuming. Furthermore, the available annotated datasets are tailored to a specific domain, language, and text type. In this work, we address this notable challenge in current state-of-the-art ABSA research. We propose a hybrid approach for Aspect Based Sentiment Analysis using transfer learning. The approach focuses on generating weakly-supervised annotations by exploiting the strengths of both large language models (LLM) and traditional syntactic dependencies. We utilise syntactic dependency structures of sentences to complement the annotations generated by LLMs, as they may overlook domain-specific aspect terms. Extensive experimentation on multiple datasets is performed to demonstrate the efficacy of our hybrid method for the tasks of aspect term extraction and aspect sentiment classification.", }
Aspect-Based Sentiment Analysis (ABSA) aims to identify terms or multiword expressions (MWEs) on which sentiments are expressed and the sentiment polarities associated with them. The development of supervised models has been at the forefront of research in this area. However, training these models requires the availability of manually annotated datasets, which is both expensive and time-consuming. Furthermore, the available annotated datasets are tailored to a specific domain, language, and text type. In this work, we address this notable challenge in current state-of-the-art ABSA research. We propose a hybrid approach for Aspect-Based Sentiment Analysis using transfer learning. The approach focuses on generating weakly-supervised annotations by exploiting the strengths of both large language models (LLMs) and traditional syntactic dependencies. We utilise syntactic dependency structures of sentences to complement the annotations generated by LLMs, as they may overlook domain-specific aspect terms. Extensive experimentation on multiple datasets is performed to demonstrate the efficacy of our hybrid method for the tasks of aspect term extraction and aspect sentiment classification.
[ "Negi, Gaurav", "Sarkar, Rajdeep", "Zayed, Omnia", "Buitelaar, Paul" ]
A Hybrid Approach to Aspect Based Sentiment Analysis Using Transfer Learning
lrec-main.56
Poster
2403.17254
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.57.bib
https://aclanthology.org/2024.lrec-main.57/
@inproceedings{urakawa-etal-2024-japanese, title = "A {J}apanese News Simplification Corpus with Faithfulness", author = "Urakawa, Toru and Taguchi, Yuya and Niitsuma, Takuro and Tamori, Hideaki", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.57", pages = "659--665", abstract = "Text Simplification enhances the readability of texts for specific audiences. However, automated models may introduce unwanted content or omit essential details, necessitating a focus on maintaining faithfulness to the original input. Furthermore, existing simplified corpora contain instances of low faithfulness. Motivated by this issue, we present a new Japanese simplification corpus designed to prioritize faithfulness. Our collection comprises 7,075 paired sentences simplified from newspaper articles. This process involved collaboration with language education experts who followed guidelines balancing readability and faithfulness. Through corpus analysis, we confirmed that our dataset preserves the content of the original text, including personal names, dates, and city names. Manual evaluation showed that our corpus robustly maintains faithfulness to the original text, surpassing other existing corpora. Furthermore, evaluation by non-native readers confirmed its readability to the target audience. Through the experiment of fine-tuning and in-context learning, we demonstrated that our corpus enhances faithful sentence simplification.", }
Text Simplification enhances the readability of texts for specific audiences. However, automated models may introduce unwanted content or omit essential details, necessitating a focus on maintaining faithfulness to the original input. Furthermore, existing simplified corpora contain instances of low faithfulness. Motivated by this issue, we present a new Japanese simplification corpus designed to prioritize faithfulness. Our collection comprises 7,075 paired sentences simplified from newspaper articles. This process involved collaboration with language education experts who followed guidelines balancing readability and faithfulness. Through corpus analysis, we confirmed that our dataset preserves the content of the original text, including personal names, dates, and city names. Manual evaluation showed that our corpus robustly maintains faithfulness to the original text, surpassing other existing corpora. Furthermore, evaluation by non-native readers confirmed its readability to the target audience. Through the experiment of fine-tuning and in-context learning, we demonstrated that our corpus enhances faithful sentence simplification.
[ "Urakawa, Toru", "Taguchi, Yuya", "Niitsuma, Takuro", "Tamori, Hideaki" ]
A Japanese News Simplification Corpus with Faithfulness
lrec-main.57
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.58.bib
https://aclanthology.org/2024.lrec-main.58/
@inproceedings{li-etal-2024-knowledge, title = "A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation", author = "Li, Xiangci and Song, Linfeng and Jin, Lifeng and Mi, Haitao and Ouyang, Jessica and Yu, Dong", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.58", pages = "666--676", abstract = "Knowledge-based, open-domain dialogue generation aims to build chit-chat systems that talk to humans using mined support knowledge. Many types and sources of knowledge have previously been shown to be useful as support knowledge. Even in the era of large language models, response generation grounded in knowledge retrieved from additional up-to-date sources remains a practically important approach. While prior work using single-source knowledge has shown a clear positive correlation between the performances of knowledge selection and response generation, there are no existing multi-source datasets for evaluating support knowledge retrieval. Further, prior work has assumed that the knowledge sources available at test time are the same as during training. This unrealistic assumption unnecessarily handicaps models, as new knowledge sources can become available after a model is trained. In this paper, we present a high-quality benchmark named multi-source Wizard of Wikipedia (Ms.WoW) for evaluating multi-source dialogue knowledge selection and response generation. Unlike existing datasets, it contains clean support knowledge, grounded at the utterance level and partitioned into multiple knowledge sources. We further propose a new challenge, dialogue knowledge plug-and-play, which aims to test an already trained dialogue model on using new support knowledge from previously unseen sources in a zero-shot fashion.", }
Knowledge-based, open-domain dialogue generation aims to build chit-chat systems that talk to humans using mined support knowledge. Many types and sources of knowledge have previously been shown to be useful as support knowledge. Even in the era of large language models, response generation grounded in knowledge retrieved from additional up-to-date sources remains a practically important approach. While prior work using single-source knowledge has shown a clear positive correlation between the performances of knowledge selection and response generation, there are no existing multi-source datasets for evaluating support knowledge retrieval. Further, prior work has assumed that the knowledge sources available at test time are the same as during training. This unrealistic assumption unnecessarily handicaps models, as new knowledge sources can become available after a model is trained. In this paper, we present a high-quality benchmark named multi-source Wizard of Wikipedia (Ms.WoW) for evaluating multi-source dialogue knowledge selection and response generation. Unlike existing datasets, it contains clean support knowledge, grounded at the utterance level and partitioned into multiple knowledge sources. We further propose a new challenge, dialogue knowledge plug-and-play, which aims to test an already trained dialogue model on using new support knowledge from previously unseen sources in a zero-shot fashion.
[ "Li, Xiangci", "Song, Linfeng", "Jin, Lifeng", "Mi, Haitao", "Ouyang, Jessica", "Yu, Dong" ]
A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation
lrec-main.58
Poster
2403.03496
[ "https://github.com/jacklxc/ms.wow" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.59.bib
https://aclanthology.org/2024.lrec-main.59/
@inproceedings{haider-2024-large, title = "A Large Annotated Reference Corpus of {N}ew {H}igh {G}erman Poetry", author = "Haider, Thomas", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.59", pages = "677--683", abstract = "This paper introduces a large annotated corpus of public domain German poetry, covering the time period from 1600 to the 1920s with 65k poems. We describe how the corpus was compiled, how it was cleaned (including duplicate detection), and how it looks now in terms of size, format, temporal distribution, and automatic annotation. Besides metadata, the corpus contains reliable annotation of tokens, syllables, part-of-speech, and meter and verse measure. Finally, we give some statistics on the annotation and an overview of other poetry corpora.", }
This paper introduces a large annotated corpus of public domain German poetry, covering the time period from 1600 to the 1920s with 65k poems. We describe how the corpus was compiled, how it was cleaned (including duplicate detection), and how it looks now in terms of size, format, temporal distribution, and automatic annotation. Besides metadata, the corpus contains reliable annotation of tokens, syllables, part-of-speech, and meter and verse measure. Finally, we give some statistics on the annotation and an overview of other poetry corpora.
[ "Haider, Thomas" ]
A Large Annotated Reference Corpus of New High German Poetry
lrec-main.59
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.60.bib
https://aclanthology.org/2024.lrec-main.60/
@inproceedings{liu-etal-2024-lifelong, title = "A Lifelong Multilingual Multi-granularity Semantic Alignment Approach via Maximum Co-occurrence Probability", author = "Liu, Xin and Sun, Hongwei and Dai, Shaojie and Lv, Bo and Pan, Youcheng and Wang, Hui and Yu, Yue", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.60", pages = "684--694", abstract = "Cross-lingual pre-training methods mask and predict tokens in multilingual text to generalize diverse multilingual information. However, due to the lack of sufficient aligned multilingual resources in the pre-training process, these methods may not fully explore the multilingual correlation of masked tokens, resulting in the limitation of multilingual information interaction. In this paper, we propose a lifelong multilingual multi-granularity semantic alignment approach, which continuously extracts massive aligned linguistic units from noisy data via a maximum co-occurrence probability algorithm. Then, the approach releases a version of the multilingual multi-granularity semantic alignment resource, supporting seven languages, namely English, Czech, German, Russian, Romanian, Hindi and Turkish. Finally, we propose how to use this resource to improve the translation performance on WMT14 18 benchmarks in twelve directions. Experimental results show an average of 0.3 1.1 BLEU improvements in all translation benchmarks. The analysis and discussion also demonstrate the superiority and potential of the proposed approach. The resource used in this work will be publicly available.", }
Cross-lingual pre-training methods mask and predict tokens in multilingual text to generalize diverse multilingual information. However, due to the lack of sufficient aligned multilingual resources in the pre-training process, these methods may not fully explore the multilingual correlation of masked tokens, limiting multilingual information interaction. In this paper, we propose a lifelong multilingual multi-granularity semantic alignment approach, which continuously extracts massive aligned linguistic units from noisy data via a maximum co-occurrence probability algorithm. Then, the approach releases a version of the multilingual multi-granularity semantic alignment resource, supporting seven languages, namely English, Czech, German, Russian, Romanian, Hindi and Turkish. Finally, we propose how to use this resource to improve the translation performance on the WMT14-18 benchmarks in twelve directions. Experimental results show average improvements of 0.3 to 1.1 BLEU points across all translation benchmarks. The analysis and discussion also demonstrate the superiority and potential of the proposed approach. The resource used in this work will be publicly available.
[ "Liu, Xin", "Sun, Hongwei", "Dai, Shaojie", "Lv, Bo", "Pan, Youcheng", "Wang, Hui", "Yu, Yue" ]
A Lifelong Multilingual Multi-granularity Semantic Alignment Approach via Maximum Co-occurrence Probability
lrec-main.60
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.61.bib
https://aclanthology.org/2024.lrec-main.61/
@inproceedings{dobranic-etal-2024-lightweight, title = "A Lightweight Approach to a Giga-Corpus of Historical Periodicals: The Story of a {S}lovenian Historical Newspaper Collection", author = "Dobrani{\'c}, Filip and Evkoski, Bojan and Ljube{\v{s}}i{\'c}, Nikola", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.61", pages = "695--703", abstract = "Preparing historical newspaper collections is a complicated endeavour, consisting of multiple steps that have to be carefully adapted to the specific content in question, including imaging, layout prediction, optical character recognition, and linguistic annotation. To address the high costs associated with the process, we present a lightweight approach to producing high-quality corpora and apply it to a massive collection of Slovenian historical newspapers from the 18th, 19th and 20th century resulting in a billion-word giga-corpus. We start with noisy OCR-ed data produced by different technologies in varying periods by the National and University Library of Slovenia. To address the inherent variability in the quality of textual data, a challenge commonly encountered in digital libraries globally, we perform a targeted post-digitisation correction procedure, coupled with a robust curation mechanism for noisy texts via language model inference. Subsequently, we subject the corrected and filtered output to comprehensive linguistic annotation, enriching the corpus with part-of-speech tags, lemmas, and named entity labels. Finally, we perform an analysis through topic modeling at the noun lemma level, along with a frequency analysis of the named entities, to confirm the viability of our corpus preparation method.", }
Preparing historical newspaper collections is a complicated endeavour, consisting of multiple steps that have to be carefully adapted to the specific content in question, including imaging, layout prediction, optical character recognition, and linguistic annotation. To address the high costs associated with the process, we present a lightweight approach to producing high-quality corpora and apply it to a massive collection of Slovenian historical newspapers from the 18th, 19th, and 20th centuries, resulting in a billion-word giga-corpus. We start with noisy OCR-ed data produced by different technologies in varying periods by the National and University Library of Slovenia. To address the inherent variability in the quality of textual data, a challenge commonly encountered in digital libraries globally, we perform a targeted post-digitisation correction procedure, coupled with a robust curation mechanism for noisy texts via language model inference. Subsequently, we subject the corrected and filtered output to comprehensive linguistic annotation, enriching the corpus with part-of-speech tags, lemmas, and named entity labels. Finally, we perform an analysis through topic modeling at the noun lemma level, along with a frequency analysis of the named entities, to confirm the viability of our corpus preparation method.
[ "Dobrani{\\'c}, Filip", "Evkoski, Bojan", "Ljube{\\v{s}}i{\\'c}, Nikola" ]
A Lightweight Approach to a Giga-Corpus of Historical Periodicals: The Story of a Slovenian Historical Newspaper Collection
lrec-main.61
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.62.bib
https://aclanthology.org/2024.lrec-main.62/
@inproceedings{jorgensen-kasen-2024-aligning, title = "Aligning the {N}orwegian {UD} Treebank with Entity and Coreference Information", author = "J{\o}rgensen, Tollef Emil and K{\aa}sen, Andre", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.62", pages = "704--710", abstract = "This paper presents a merged collection of entity and coreference annotated data grounded in the Universal Dependencies (UD) treebanks for the two written forms of Norwegian: Bokm{\aa}l and Nynorsk. The aligned and converted corpora are the Norwegian Named Entities (NorNE) and Norwegian Anaphora Resolution Corpus (NARC). While NorNE is aligned with an older version of the treebank, NARC is misaligned and requires extensive transformation from the original annotations to the UD structure and CoNLL-U format. Here, we demonstrate the conversion and alignment processes, along with an analysis of discovered issues and errors in the data, some of which include data split overlaps in the original treebank. These procedures and the developed system may prove helpful for future work on processing and aligning data from universal dependencies. The merged corpora comprise the first Norwegian UD treebank enriched with named entities and coreference information, supporting the standardized format for the CorefUD initiative.", }
This paper presents a merged collection of entity and coreference annotated data grounded in the Universal Dependencies (UD) treebanks for the two written forms of Norwegian: Bokmål and Nynorsk. The aligned and converted corpora are the Norwegian Named Entities (NorNE) and Norwegian Anaphora Resolution Corpus (NARC). While NorNE is aligned with an older version of the treebank, NARC is misaligned and requires extensive transformation from the original annotations to the UD structure and CoNLL-U format. Here, we demonstrate the conversion and alignment processes, along with an analysis of discovered issues and errors in the data, some of which include data split overlaps in the original treebank. These procedures and the developed system may prove helpful for future work on processing and aligning data from Universal Dependencies. The merged corpora comprise the first Norwegian UD treebank enriched with named entities and coreference information, supporting the standardized format for the CorefUD initiative.
[ "J{\\o}rgensen, Tollef Emil", "K{\\aa}sen, Andre" ]
Aligning the Norwegian UD Treebank with Entity and Coreference Information
lrec-main.62
Poster
2305.13527
[ "https://github.com/tollefj/ud-narc" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.63.bib
https://aclanthology.org/2024.lrec-main.63/
@inproceedings{zhu-etal-2024-alignment, title = "Alignment before Awareness: Towards Visual Question Localized-Answering in Robotic Surgery via Optimal Transport and Answer Semantics", author = "Zhu, Zhihong and Zhang, Yunyan and Cheng, Xuxin and Huang, Zhiqi and Xu, Derong and Wu, Xian and Zheng, Yefeng", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.63", pages = "711--721", abstract = "The visual question localized-answering (VQLA) system has garnered increasing attention due to its potential as a knowledgeable assistant in surgical education. Apart from providing text-based answers, VQLA can also pinpoint the specific region of interest for better surgical scene understanding. Although recent Transformer-based models for VQLA have obtained promising results, they (1) conduct vanilla text-to-image cross attention, leading to unidirectional and coarse-grained alignment; (2) ignore exploiting the semantics of answers to further boost performance. In this paper, we propose a novel model termed OTAS, which first introduces optimal transport to achieve bidirectional and fine-grained alignment between images and questions, enabling more precise localization. Besides, OTAS incorporates a set of learnable candidate answer embeddings to query the probability of each answer class for a given image-question pair. Through Transformer attention, the candidate answer embeddings interact with the fused features of the image-question pair to make the answer decision. Extensive experiments on two widely-used benchmark datasets demonstrate the superiority of our model over state-of-the-art methods.", }
The visual question localized-answering (VQLA) system has garnered increasing attention due to its potential as a knowledgeable assistant in surgical education. Apart from providing text-based answers, VQLA can also pinpoint the specific region of interest for better surgical scene understanding. Although recent Transformer-based models for VQLA have obtained promising results, they (1) conduct vanilla text-to-image cross attention, leading to unidirectional and coarse-grained alignment; (2) ignore exploiting the semantics of answers to further boost performance. In this paper, we propose a novel model termed OTAS, which first introduces optimal transport to achieve bidirectional and fine-grained alignment between images and questions, enabling more precise localization. Besides, OTAS incorporates a set of learnable candidate answer embeddings to query the probability of each answer class for a given image-question pair. Through Transformer attention, the candidate answer embeddings interact with the fused features of the image-question pair to make the answer decision. Extensive experiments on two widely-used benchmark datasets demonstrate the superiority of our model over state-of-the-art methods.
[ "Zhu, Zhihong", "Zhang, Yunyan", "Cheng, Xuxin", "Huang, Zhiqi", "Xu, Derong", "Wu, Xian", "Zheng, Yefeng" ]
Alignment before Awareness: Towards Visual Question Localized-Answering in Robotic Surgery via Optimal Transport and Answer Semantics
lrec-main.63
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.64.bib
https://aclanthology.org/2024.lrec-main.64/
@inproceedings{jin-etal-2024-align, title = "Align-to-Distill: Trainable Attention Alignment for Knowledge Distillation in Neural Machine Translation", author = "Jin, Heegon and Son, Seonil and Park, Jemin and Kim, Youngseok and Noh, Hyungjong and Lee, Yeonsoo", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.64", pages = "722--732", abstract = "The advent of scalable deep models and large datasets has improved the performance of Neural Machine Translation (NMT). Knowledge Distillation (KD) enhances efficiency by transferring knowledge from a teacher model to a more compact student model. However, KD approaches to Transformer architecture often rely on heuristics, particularly when deciding which teacher layers to distill from. In this paper, we introduce the {``}Align-to-Distill{''} (A2D) strategy, designed to address the feature mapping problem by adaptively aligning student attention heads with their teacher counterparts during training. The Attention Alignment Module (AAM) in A2D performs a dense head-by-head comparison between student and teacher attention heads across layers, turning the combinatorial mapping heuristics into a learning problem. Our experiments show the efficacy of A2D, demonstrating gains of up to +3.61 and +0.63 BLEU points for WMT-2022 De→Dsb and WMT-2014 En→De, respectively, compared to Transformer baselines.The code and data are available at https://github.com/ncsoft/Align-to-Distill.", }
The advent of scalable deep models and large datasets has improved the performance of Neural Machine Translation (NMT). Knowledge Distillation (KD) enhances efficiency by transferring knowledge from a teacher model to a more compact student model. However, KD approaches to the Transformer architecture often rely on heuristics, particularly when deciding which teacher layers to distill from. In this paper, we introduce the "Align-to-Distill" (A2D) strategy, designed to address the feature mapping problem by adaptively aligning student attention heads with their teacher counterparts during training. The Attention Alignment Module (AAM) in A2D performs a dense head-by-head comparison between student and teacher attention heads across layers, turning the combinatorial mapping heuristics into a learning problem. Our experiments show the efficacy of A2D, demonstrating gains of up to +3.61 and +0.63 BLEU points for WMT-2022 De→Dsb and WMT-2014 En→De, respectively, compared to Transformer baselines. The code and data are available at https://github.com/ncsoft/Align-to-Distill.
[ "Jin, Heegon", "Son, Seonil", "Park, Jemin", "Kim, Youngseok", "Noh, Hyungjong", "Lee, Yeonsoo" ]
Align-to-Distill: Trainable Attention Alignment for Knowledge Distillation in Neural Machine Translation
lrec-main.64
Poster
2403.01479
[ "https://github.com/ncsoft/Align-to-Distill" ]
-1
-1
-1
-1
0
[]
[]
[]
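The Align-to-Distill record above replaces hand-picked teacher-layer mappings with a trainable alignment between student and teacher attention heads. Below is a minimal sketch of that idea, assuming a dense soft mapping learned via softmax weights and an MSE objective; the class name, weighting scheme, and loss choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class AttentionAlignment(torch.nn.Module):
    """Schematic stand-in for an A2D-style attention alignment module:
    each student head is matched to a learnable mixture of teacher heads,
    turning the head-mapping heuristic into a trainable soft assignment."""
    def __init__(self, n_student_heads: int, n_teacher_heads: int):
        super().__init__()
        # One row of mixing logits per student head.
        self.logits = torch.nn.Parameter(torch.zeros(n_student_heads, n_teacher_heads))

    def forward(self, student_attn: torch.Tensor, teacher_attn: torch.Tensor) -> torch.Tensor:
        # student_attn: (batch, S_heads, len, len); teacher_attn: (batch, T_heads, len, len)
        mix = F.softmax(self.logits, dim=-1)                        # (S_heads, T_heads)
        target = torch.einsum("st,btij->bsij", mix, teacher_attn)   # mixed teacher maps
        return F.mse_loss(student_attn, target)

# Toy usage: 4 student heads distilled from 8 teacher heads on a length-5 sequence.
aam = AttentionAlignment(4, 8)
loss = aam(torch.rand(2, 4, 5, 5), torch.rand(2, 8, 5, 5))
loss.backward()
```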
https://aclanthology.org/2024.lrec-main.65.bib
https://aclanthology.org/2024.lrec-main.65/
@inproceedings{chen-etal-2024-linguistically, title = "A Linguistically-Informed Annotation Strategy for {K}orean Semantic Role Labeling", author = "Chen, Yige and Lim, KyungTae and Park, Jungyeul", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.65", pages = "733--738", abstract = "Semantic role labeling is an essential component of semantic and syntactic processing of natural languages, which reveals the predicate-argument structure of the language. Despite its importance, semantic role labeling for the Korean language has not been studied extensively. One notable issue is the lack of uniformity among data annotation strategies across different datasets, which often lack thorough rationales. In this study, we suggest an annotation strategy for Korean semantic role labeling that is in line with the previously proposed linguistic theories as well as the distinct properties of the Korean language. We further propose a simple yet viable conversion strategy from the Sejong verb dictionary to a CoNLL-style dataset for Korean semantic role labeling. Experiment results using a transformer-based sequence labeling model demonstrate the reliability and trainability of the converted dataset.", }
Semantic role labeling is an essential component of semantic and syntactic processing of natural languages, which reveals the predicate-argument structure of the language. Despite its importance, semantic role labeling for the Korean language has not been studied extensively. One notable issue is the lack of uniformity among data annotation strategies across different datasets, which often lack thorough rationales. In this study, we suggest an annotation strategy for Korean semantic role labeling that is in line with the previously proposed linguistic theories as well as the distinct properties of the Korean language. We further propose a simple yet viable conversion strategy from the Sejong verb dictionary to a CoNLL-style dataset for Korean semantic role labeling. Experiment results using a transformer-based sequence labeling model demonstrate the reliability and trainability of the converted dataset.
[ "Chen, Yige", "Lim, KyungTae", "Park, Jungyeul" ]
A Linguistically-Informed Annotation Strategy for Korean Semantic Role Labeling
lrec-main.65
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.66.bib
https://aclanthology.org/2024.lrec-main.66/
@inproceedings{duan-etal-2024-alleviating, title = "Alleviating Exposure Bias in Abstractive Summarization via Sequentially Generating and Revising", author = "Duan, Jiaxin and Lu, Fengyu and Liu, Junfei", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.66", pages = "739--750", abstract = "Abstractive summarization commonly suffers from exposure bias caused by supervised teacher-force learning, that a model predicts the next token conditioned on the accurate pre-context during training while on its preceding outputs at inference. Existing solutions bridge this gap through un- or semi-supervised holistic learning yet still leave the risk of error accumulation while generating a summary. In this paper, we attribute this problem to the limitation of unidirectional autoregressive text generation and introduce post-processing steps to alleviate it. Specifically, we reformat abstractive summarization to sequential generation and revision (SeGRe), i.e., a model in the revision phase re-inputs the generated summary and refines it by contrasting it with the source document. This provides the model additional opportunities to assess the flawed summary from a global view and thereby modify inappropriate expressions. Moreover, we train the SeGRe model with a regularized minimum-risk policy to ensure effective generation and revision. A lot of comparative experiments are implemented on two well-known datasets, exhibiting the new or matched state-of-the-art performance of SeGRe.", }
Abstractive summarization commonly suffers from exposure bias caused by supervised teacher-forced learning, in which a model predicts the next token conditioned on the gold pre-context during training but on its own preceding outputs at inference. Existing solutions bridge this gap through un- or semi-supervised holistic learning yet still leave the risk of error accumulation while generating a summary. In this paper, we attribute this problem to the limitation of unidirectional autoregressive text generation and introduce post-processing steps to alleviate it. Specifically, we reformulate abstractive summarization as sequential generation and revision (SeGRe), i.e., a model in the revision phase re-inputs the generated summary and refines it by contrasting it with the source document. This provides the model with additional opportunities to assess the flawed summary from a global view and thereby modify inappropriate expressions. Moreover, we train the SeGRe model with a regularized minimum-risk policy to ensure effective generation and revision. Extensive comparative experiments on two well-known datasets show that SeGRe matches or establishes new state-of-the-art performance.
[ "Duan, Jiaxin", "Lu, Fengyu", "Liu, Junfei" ]
Alleviating Exposure Bias in Abstractive Summarization via Sequentially Generating and Revising
lrec-main.66
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.67.bib
https://aclanthology.org/2024.lrec-main.67/
@inproceedings{tahon-etal-2024-allies, title = "{ALLIES}: A Speech Corpus for Segmentation, Speaker Diarization, Speech Recognition and Speaker Change Detection", author = "Tahon, Marie and Larcher, Anthony and Lebourdais, Martin and Bougares, Fethi and Silnova, Anna and Gimeno, Pablo", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.67", pages = "751--758", abstract = "This paper presents ALLIES, a meta corpus which gathers and extends existing French corpora collected from radio and TV shows. The corpus contains 1048 audio files for about 500 hours of speech. Agglomeration of data is always a difficult issue, as the guidelines used to collect, annotate and transcribe speech are generally different from one corpus to another. ALLIES intends to homogenize and correct speaker labels among the different files by integrated human feedback within a speaker verification system. The main contribution of this article is the design of a protocol in order to evaluate properly speech segmentation (including music and overlap detection), speaker diarization, speech transcription and speaker change detection. As part of it, a test partition has been carefully manually 1) segmented and annotated according to speech, music, noise, speaker labels with specific guidelines for overlap speech, 2) orthographically transcribed. This article also provides as a second contribution baseline results for several speech processing tasks.", }
This paper presents ALLIES, a meta corpus which gathers and extends existing French corpora collected from radio and TV shows. The corpus contains 1048 audio files for about 500 hours of speech. Agglomeration of data is always a difficult issue, as the guidelines used to collect, annotate and transcribe speech are generally different from one corpus to another. ALLIES intends to homogenize and correct speaker labels among the different files by integrating human feedback within a speaker verification system. The main contribution of this article is the design of a protocol for properly evaluating speech segmentation (including music and overlap detection), speaker diarization, speech transcription and speaker change detection. As part of it, a test partition has been carefully and manually (1) segmented and annotated with speech, music, noise, and speaker labels, following specific guidelines for overlapping speech, and (2) orthographically transcribed. As a second contribution, this article also provides baseline results for several speech processing tasks.
[ "Tahon, Marie", "Larcher, Anthony", "Lebourdais, Martin", "Bougares, Fethi", "Silnova, Anna", "Gimeno, Pablo" ]
ALLIES: A Speech Corpus for Segmentation, Speaker Diarization, Speech Recognition and Speaker Change Detection
lrec-main.67
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.68.bib
https://aclanthology.org/2024.lrec-main.68/
@inproceedings{yuan-etal-2024-logical, title = "A Logical Pattern Memory Pre-trained Model for Entailment Tree Generation", author = "Yuan, Li and Cai, Yi and Ren, Haopeng and Wang, Jiexin", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.68", pages = "759--772", abstract = "Generating coherent and credible explanations remains a significant challenge in the field of AI. In recent years, researchers have delved into the utilization of entailment trees to depict explanations, which exhibit a reasoning process of how a hypothesis is deduced from the supporting facts. However, existing models often overlook the importance of generating intermediate conclusions with logical consistency from the given facts, leading to inaccurate conclusions and undermining the overall credibility of entailment trees. To address this limitation, we propose the logical pattern memory pre-trained model (LMPM). LMPM incorporates an external memory structure to learn and store the latent representations of logical patterns, which aids in generating logically consistent conclusions. Furthermore, to mitigate the influence of logically irrelevant domain knowledge in the Wikipedia-based data, we introduce an entity abstraction approach to construct the dataset for pre-training LMPM. The experimental results highlight the effectiveness of our approach in improving the quality of entailment tree generation. By leveraging logical entailment patterns, our model produces more coherent and reasonable conclusions that closely align with the underlying premises.", }
Generating coherent and credible explanations remains a significant challenge in the field of AI. In recent years, researchers have delved into the utilization of entailment trees to depict explanations, which exhibit a reasoning process of how a hypothesis is deduced from the supporting facts. However, existing models often overlook the importance of generating intermediate conclusions with logical consistency from the given facts, leading to inaccurate conclusions and undermining the overall credibility of entailment trees. To address this limitation, we propose the logical pattern memory pre-trained model (LMPM). LMPM incorporates an external memory structure to learn and store the latent representations of logical patterns, which aids in generating logically consistent conclusions. Furthermore, to mitigate the influence of logically irrelevant domain knowledge in the Wikipedia-based data, we introduce an entity abstraction approach to construct the dataset for pre-training LMPM. The experimental results highlight the effectiveness of our approach in improving the quality of entailment tree generation. By leveraging logical entailment patterns, our model produces more coherent and reasonable conclusions that closely align with the underlying premises.
[ "Yuan, Li", "Cai, Yi", "Ren, Haopeng", "Wang, Jiexin" ]
A Logical Pattern Memory Pre-trained Model for Entailment Tree Generation
lrec-main.68
Poster
2403.06410
[ "https://github.com/yuanli95/t5-lmpm" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.69.bib
https://aclanthology.org/2024.lrec-main.69/
@inproceedings{li-etal-2024-alphafin, title = "{A}lpha{F}in: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework", author = "Li, Xiang and Li, Zhenyu and Shi, Chen and Xu, Yong and Du, Qing and Tan, Mingkui and Huang, Jun", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.69", pages = "773--783", abstract = "The task of financial analysis primarily encompasses two key areas: stock trend prediction and the corresponding financial question answering. Currently, machine learning and deep learning algorithms (ML{\&}DL) have been widely applied for stock trend predictions, leading to significant progress. However, these methods fail to provide reasons for predictions, lacking interpretability and reasoning processes. Also, they can not integrate textual information such as financial news or reports. Meanwhile, large language models (LLM) have remarkable textual understanding and generation ability. But due to the scarcity of financial training datasets and limited integration with real-time knowledge, LLM still suffer from hallucinations and unable to keep up with the latest information. To tackle these challenges, we first release AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data. It has positive impact on training LLM for completing financial analysis. We then use AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task, which integrates retrieval-augmented generation (RAG) techniques. Extensive experiments are conducted to demonstrate the effectiveness of our framework on financial analysis.", }
The task of financial analysis primarily encompasses two key areas: stock trend prediction and the corresponding financial question answering. Currently, machine learning and deep learning algorithms (ML&DL) have been widely applied for stock trend prediction, leading to significant progress. However, these methods fail to provide reasons for their predictions, lacking interpretability and reasoning processes. They also cannot integrate textual information such as financial news or reports. Meanwhile, large language models (LLMs) have remarkable textual understanding and generation ability. However, due to the scarcity of financial training datasets and limited integration with real-time knowledge, LLMs still suffer from hallucinations and are unable to keep up with the latest information. To tackle these challenges, we first release the AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data. These have a positive impact on training LLMs for financial analysis. We then use the AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task, which integrates retrieval-augmented generation (RAG) techniques. Extensive experiments are conducted to demonstrate the effectiveness of our framework on financial analysis.
[ "Li, Xiang", "Li, Zhenyu", "Shi, Chen", "Xu, Yong", "Du, Qing", "Tan, Mingkui", "Huang, Jun" ]
AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework
lrec-main.69
Poster
2403.12582
[ "https://github.com/alphafin-proj/alphafin" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.70.bib
https://aclanthology.org/2024.lrec-main.70/
@inproceedings{anastasiou-etal-2024-luxembourgish, title = "A {L}uxembourgish Corpus as a Gender Bias Evaluation Testset", author = "Anastasiou, Dimitra and Blond-Hanten, Carole and Gallais, Marie", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.70", pages = "784--788", abstract = "According to the United Nations Development Programme, gender inequality is a metric that is composed of three dimensions: reproductive health, empowerment, and the labour market. Gender inequality is an obstacle to equal opportunities in society as a whole. In this paper we present our work-in-progress of designing and playing a physical game with digital elements. We currently conduct Conversation Analysis of transcribed speech of 58567 words and documenting bias. We also test OpenAI{'}s ChatGPT for bias in quiz-like gender-related questions.", }
According to the United Nations Development Programme, gender inequality is a metric that is composed of three dimensions: reproductive health, empowerment, and the labour market. Gender inequality is an obstacle to equal opportunities in society as a whole. In this paper we present our work in progress on designing and playing a physical game with digital elements. We are currently conducting Conversation Analysis of 58,567 words of transcribed speech and documenting bias. We also test OpenAI's ChatGPT for bias in quiz-like gender-related questions.
[ "Anastasiou, Dimitra", "Blond-Hanten, Carole", "Gallais, Marie" ]
A Luxembourgish Corpus as a Gender Bias Evaluation Testset
lrec-main.70
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.71.bib
https://aclanthology.org/2024.lrec-main.71/
@inproceedings{bizzoni-etal-2024-matter, title = "A Matter of Perspective: Building a Multi-Perspective Annotated Dataset for the Study of Literary Quality", author = "Bizzoni, Yuri and Moreira, Pascale Feldkamp and Lassen, Ida Marie S. and Thomsen, Mads Rosendahl and Nielbo, Kristoffer", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.71", pages = "789--800", abstract = "Studies on literary quality have constantly stimulated the interest of critics, both in theoretical and empirical fields. To examine the perceived quality of literary works, some approaches have focused on data annotated through crowd-sourcing platforms, and others relied on available expert annotated data. In this work, we contribute to the debate by presenting a dataset collecting quality judgments on 9,000 19th and 20th century English-language literary novels by 3,150 predominantly Anglophone authors. We incorporate expert opinions and crowd-sourced annotations to allow comparative analyses between different literary quality evaluations. We also provide several textual metrics chosen for their potential connection with literary reception and engagement. While a large part of the texts is subjected to copyright, we release quality and reception measures together with stylometric and sentiment data for each of the 9,000 novels to promote future research and comparison.", }
Studies on literary quality have constantly stimulated the interest of critics, both in theoretical and empirical fields. To examine the perceived quality of literary works, some approaches have focused on data annotated through crowd-sourcing platforms, and others have relied on available expert-annotated data. In this work, we contribute to the debate by presenting a dataset collecting quality judgments on 9,000 19th and 20th century English-language literary novels by 3,150 predominantly Anglophone authors. We incorporate expert opinions and crowd-sourced annotations to allow comparative analyses between different literary quality evaluations. We also provide several textual metrics chosen for their potential connection with literary reception and engagement. While a large part of the texts is subject to copyright, we release quality and reception measures together with stylometric and sentiment data for each of the 9,000 novels to promote future research and comparison.
[ "Bizzoni, Yuri", "Moreira, Pascale Feldkamp", "Lassen, Ida Marie S.", "Thomsen, Mads Rosendahl", "Nielbo, Kristoffer" ]
A Matter of Perspective: Building a Multi-Perspective Annotated Dataset for the Study of Literary Quality
lrec-main.71
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.72.bib
https://aclanthology.org/2024.lrec-main.72/
@inproceedings{gajbhiye-etal-2024-amended, title = "{AM}en{D}e{D}: Modelling Concepts by Aligning Mentions, Definitions and Decontextualised Embeddings", author = "Gajbhiye, Amit and Bouraoui, Zied and Espinosa Anke, Luis and Schockaert, Steven", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.72", pages = "801--811", abstract = "Contextualised Language Models (LM) improve on traditional word embeddings by encoding the meaning of words in context. However, such models have also made it possible to learn high-quality decontextualised concept embeddings. Three main strategies for learning such embeddings have thus far been considered: (i) fine-tuning the LM to directly predict concept embeddings from the name of the concept itself, (ii) averaging contextualised representations of mentions of the concept in a corpus, and (iii) encoding definitions of the concept. As these strategies have complementary strengths and weaknesses, we propose to learn a unified embedding space in which all three types of representations can be integrated. We show that this allows us to outperform existing approaches in tasks such as ontology completion, which heavily depends on access to high-quality concept embeddings. We furthermore find that mentions and definitions are well-aligned in the resulting space, enabling tasks such as target sense verification, even without the need for any fine-tuning.", }
Contextualised Language Models (LM) improve on traditional word embeddings by encoding the meaning of words in context. However, such models have also made it possible to learn high-quality decontextualised concept embeddings. Three main strategies for learning such embeddings have thus far been considered: (i) fine-tuning the LM to directly predict concept embeddings from the name of the concept itself, (ii) averaging contextualised representations of mentions of the concept in a corpus, and (iii) encoding definitions of the concept. As these strategies have complementary strengths and weaknesses, we propose to learn a unified embedding space in which all three types of representations can be integrated. We show that this allows us to outperform existing approaches in tasks such as ontology completion, which heavily depends on access to high-quality concept embeddings. We furthermore find that mentions and definitions are well-aligned in the resulting space, enabling tasks such as target sense verification, even without the need for any fine-tuning.
[ "Gajbhiye, Amit", "Bouraoui, Zied", "Espinosa Anke, Luis", "Schockaert, Steven" ]
AMenDeD: Modelling Concepts by Aligning Mentions, Definitions and Decontextualised Embeddings
lrec-main.72
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.73.bib
https://aclanthology.org/2024.lrec-main.73/
@inproceedings{icard-etal-2024-multi, title = "A Multi-Label Dataset of {F}rench Fake News: Human and Machine Insights", author = "Icard, Benjamin and Maine, Fran{\c{c}}ois and Casanova, Morgane and Faye, G{\'e}raud and Chanson, Julien and Gadek, Guillaume and Atemezing, Ghislain and Bancilhon, Fran{\c{c}}ois and {\'E}gr{\'e}, Paul", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.73", pages = "812--818", abstract = "We present a corpus of 100 documents, named OBSINFOX, selected from 17 sources of French press considered unreliable by expert agencies, annotated using 11 labels by 8 annotators. By collecting more labels than usual, by more annotators than is typically done, we can identify features that humans consider as characteristic of fake news, and compare them to the predictions of automated classifiers. We present a topic and genre analysis using Gate Cloud, indicative of the prevalence of satire-like text in the corpus. We then use the subjectivity analyzer VAGO, and a neural version of it, to clarify the link between ascriptions of the label Subjective and ascriptions of the label Fake News. The annotated dataset is available online at the following url: https://github.com/obs-info/obsinfox Keywords: Fake News, Multi-Labels, Subjectivity, Vagueness, Detail, Opinion, Exaggeration, French Press", }
We present a corpus of 100 documents, named OBSINFOX, selected from 17 sources of French press considered unreliable by expert agencies, annotated using 11 labels by 8 annotators. By collecting more labels than usual, by more annotators than is typically done, we can identify features that humans consider as characteristic of fake news, and compare them to the predictions of automated classifiers. We present a topic and genre analysis using Gate Cloud, indicative of the prevalence of satire-like text in the corpus. We then use the subjectivity analyzer VAGO, and a neural version of it, to clarify the link between ascriptions of the label Subjective and ascriptions of the label Fake News. The annotated dataset is available online at the following url: https://github.com/obs-info/obsinfox Keywords: Fake News, Multi-Labels, Subjectivity, Vagueness, Detail, Opinion, Exaggeration, French Press
[ "Icard, Benjamin", "Maine, Fran{\\c{c}}ois", "Casanova, Morgane", "Faye, G{\\'e}raud", "Chanson, Julien", "Gadek, Guillaume", "Atemezing, Ghislain", "Bancilhon, Fran{\\c{c}}ois", "{\\'E}gr{\\'e}, Paul" ]
A Multi-Label Dataset of French Fake News: Human and Machine Insights
lrec-main.73
Poster
2403.16099
[ "https://github.com/obs-info/obsinfox" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.74.bib
https://aclanthology.org/2024.lrec-main.74/
@inproceedings{pensa-etal-2024-multi, title = "A Multi-layered Approach to Physical Commonsense Understanding: Creation and Evaluation of an {I}talian Dataset", author = "Pensa, Giulia and Altuna, Bego{\~n}a and Gonzalez-Dios, Itziar", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.74", pages = "819--831", abstract = "In this paper, we explore physical commonsense reasoning of large language models (LLMs) and propose a specific methodology to evaluate low-level understanding of the physical world. Specifically, the goal is to create a test set to analyze physical commonsense reasoning in large language models for Italian and focus on a trustworthy analysis of the results. To that end, we present a tiered Italian dataset, called Graded Italian Annotated dataset (GITA), written and thoroughly annotated by a professional linguist, which allows us to concentrate on three different levels of commonsense understanding. Moreover, we create a semi-automated system to complete the accurate annotation of the dataset. We also validate our dataset by carrying out three tasks with a multilingual model (XLM-RoBERTa) and propose a qualitative analysis of the results. We found out that, although the model may perform at high-level classification tasks, its easoning is inconsistent and unverifiable, since it does not capture intermediate evidence.", }
In this paper, we explore physical commonsense reasoning of large language models (LLMs) and propose a specific methodology to evaluate low-level understanding of the physical world. Specifically, the goal is to create a test set to analyze physical commonsense reasoning in large language models for Italian and focus on a trustworthy analysis of the results. To that end, we present a tiered Italian dataset, called Graded Italian Annotated dataset (GITA), written and thoroughly annotated by a professional linguist, which allows us to concentrate on three different levels of commonsense understanding. Moreover, we create a semi-automated system to complete the accurate annotation of the dataset. We also validate our dataset by carrying out three tasks with a multilingual model (XLM-RoBERTa) and propose a qualitative analysis of the results. We found that, although the model may perform well at high-level classification tasks, its reasoning is inconsistent and unverifiable, since it does not capture intermediate evidence.
[ "Pensa, Giulia", "Altuna, Bego{\\~n}a", "Gonzalez-Dios, Itziar" ]
A Multi-layered Approach to Physical Commonsense Understanding: Creation and Evaluation of an Italian Dataset
lrec-main.74
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.75.bib
https://aclanthology.org/2024.lrec-main.75/
@inproceedings{petrariu-nisioi-2024-multilingual, title = "A Multilingual Parallel Corpus for {A}romanian", author = "Petrariu, Iulia and Nisioi, Sergiu", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.75", pages = "832--838", abstract = "We report the creation of the first high-quality corpus of Aromanian - an endangered Romance language spoken in the Balkans - and the equivalent sentence-aligned translations into Romanian, English, and French. The corpus is released publicly using several orthographic standards and consists in short stories collected in the {`}70s in Romania. Additionally, we provide an corpus-based analysis of Aromanian linguistic particularities and the overall demographic and political context which impacts the contemporary development of the language.", }
We report the creation of the first high-quality corpus of Aromanian - an endangered Romance language spoken in the Balkans - and the equivalent sentence-aligned translations into Romanian, English, and French. The corpus is released publicly using several orthographic standards and consists of short stories collected in the {`}70s in Romania. Additionally, we provide a corpus-based analysis of Aromanian linguistic particularities and the overall demographic and political context which impacts the contemporary development of the language.
[ "Petrariu, Iulia", "Nisioi, Sergiu" ]
A Multilingual Parallel Corpus for Aromanian
lrec-main.75
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.76.bib
https://aclanthology.org/2024.lrec-main.76/
@inproceedings{macaire-etal-2024-multimodal, title = "A Multimodal {F}rench Corpus of Aligned Speech, Text, and Pictogram Sequences for Speech-to-Pictogram Machine Translation", author = "Macaire, C{\'e}cile and Dion, Chlo{\'e} and Arrigo, Jordan and Lemaire, Claire and Esperan{\c{c}}a-Rodier, Emmanuelle and Lecouteux, Benjamin and Schwab, Didier", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.76", pages = "839--849", abstract = "The automatic translation of spoken language into pictogram units can facilitate communication involving individuals with language impairments. However, there is no established translation formalism or publicly available datasets for training end-to-end speech translation systems. This paper introduces the first aligned speech, text, and pictogram translation dataset ever created in any language. We provide a French dataset that contains 230 hours of speech resources. We create a rule-based pictogram grammar with a restricted vocabulary and include a discussion of the strategic decisions involved. It takes advantage of an in-depth linguistic study of resources taken from the ARASAAC website. We validate these rules through multiple post-editing phases by expert annotators. The constructed dataset is then used to experiment with a Speech-to-Pictogram cascade model, which employs state-of-the-art Automatic Speech Recognition models. The dataset is freely available under a non-commercial licence. This marks a starting point to conduct research into the automatic translation of speech into pictogram units.", }
The automatic translation of spoken language into pictogram units can facilitate communication involving individuals with language impairments. However, there is no established translation formalism or publicly available datasets for training end-to-end speech translation systems. This paper introduces the first aligned speech, text, and pictogram translation dataset ever created in any language. We provide a French dataset that contains 230 hours of speech resources. We create a rule-based pictogram grammar with a restricted vocabulary and include a discussion of the strategic decisions involved. It takes advantage of an in-depth linguistic study of resources taken from the ARASAAC website. We validate these rules through multiple post-editing phases by expert annotators. The constructed dataset is then used to experiment with a Speech-to-Pictogram cascade model, which employs state-of-the-art Automatic Speech Recognition models. The dataset is freely available under a non-commercial licence. This marks a starting point to conduct research into the automatic translation of speech into pictogram units.
[ "Macaire, C{\\'e}cile", "Dion, Chlo{\\'e}", "Arrigo, Jordan", "Lemaire, Claire", "Esperan{\\c{c}}a-Rodier, Emmanuelle", "Lecouteux, Benjamin", "Schwab, Didier" ]
A Multimodal French Corpus of Aligned Speech, Text, and Pictogram Sequences for Speech-to-Pictogram Machine Translation
lrec-main.76
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.77.bib
https://aclanthology.org/2024.lrec-main.77/
@inproceedings{li-etal-2024-multimodal, title = "A Multimodal In-Context Tuning Approach for {E}-Commerce Product Description Generation", author = "Li, Yunxin and Hu, Baotian and Luo, Wenhan and Ma, Lin and Ding, Yuxin and Zhang, Min", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.77", pages = "850--861", abstract = "In this paper, we propose a new setting for generating product descriptions from images, augmented by marketing keywords. It leverages the combined power of visual and textual information to create descriptions that are more tailored to the unique features of products. For this setting, previous methods utilize visual and textual encoders to encode the image and keywords and employ a language model-based decoder to generate the product description. However, the generated description is often inaccurate and generic since same-category products have similar copy-writings, and optimizing the overall framework on large-scale samples makes models concentrate on common words yet ignore the product features. To alleviate the issue, we present a simple and effective Multimodal In-Context Tuning approach, named ModICT, which introduces a similar product sample as the reference and utilizes the in-context learning capability of language models to produce the description. During training, we keep the visual encoder and language model frozen, focusing on optimizing the modules responsible for creating multimodal in-context references and dynamic prompts. This approach preserves the language generation prowess of large language models (LLMs), facilitating a substantial increase in description diversity. To assess the effectiveness of ModICT across various language model scales and types, we collect data from three distinct product categories within the E-commerce domain. Extensive experiments demonstrate that ModICT significantly improves the accuracy (by up to 3.3{\%} on Rouge-L) and diversity (by up to 9.4{\%} on D-5) of generated results compared to conventional methods. Our findings underscore the potential of ModICT as a valuable tool for enhancing the automatic generation of product descriptions in a wide range of applications. Data and code are at https://github.com/HITsz-TMG/Multimodal-In-Context-Tuning", }
In this paper, we propose a new setting for generating product descriptions from images, augmented by marketing keywords. It leverages the combined power of visual and textual information to create descriptions that are more tailored to the unique features of products. For this setting, previous methods utilize visual and textual encoders to encode the image and keywords and employ a language model-based decoder to generate the product description. However, the generated description is often inaccurate and generic since same-category products have similar copy-writings, and optimizing the overall framework on large-scale samples makes models concentrate on common words yet ignore the product features. To alleviate the issue, we present a simple and effective Multimodal In-Context Tuning approach, named ModICT, which introduces a similar product sample as the reference and utilizes the in-context learning capability of language models to produce the description. During training, we keep the visual encoder and language model frozen, focusing on optimizing the modules responsible for creating multimodal in-context references and dynamic prompts. This approach preserves the language generation prowess of large language models (LLMs), facilitating a substantial increase in description diversity. To assess the effectiveness of ModICT across various language model scales and types, we collect data from three distinct product categories within the E-commerce domain. Extensive experiments demonstrate that ModICT significantly improves the accuracy (by up to 3.3{\%} on Rouge-L) and diversity (by up to 9.4{\%} on D-5) of generated results compared to conventional methods. Our findings underscore the potential of ModICT as a valuable tool for enhancing the automatic generation of product descriptions in a wide range of applications. Data and code are at https://github.com/HITsz-TMG/Multimodal-In-Context-Tuning
[ "Li, Yunxin", "Hu, Baotian", "Luo, Wenhan", "Ma, Lin", "Ding, Yuxin", "Zhang, Min" ]
A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation
lrec-main.77
Poster
2402.13587
[ "https://github.com/hitsz-tmg/multimodal-in-context-tuning" ]
https://huggingface.co./papers/2402.13587
1
0
0
6
1
[]
[ "YunxinLi/MD2T" ]
[]
https://aclanthology.org/2024.lrec-main.78.bib
https://aclanthology.org/2024.lrec-main.78/
@inproceedings{zhu-etal-2024-multi, title = "A Multi-Task Transformer Model for Fine-grained Labelling of Chest {X}-Ray Reports", author = "Zhu, Yuanyi and Liakata, Maria and Montana, Giovanni", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.78", pages = "862--875", abstract = "Precise understanding of free-text radiology reports through localised extraction of clinical findings can enhance medical imaging applications like computer-aided diagnosis. We present a new task, that of segmenting radiology reports into topically meaningful passages (segments) and a transformer-based model that both segments reports into semantically coherent segments and classifies each segment using a set of 37 radiological abnormalities, thus enabling fine-grained analysis. This contrasts with prior work that performs classification on full reports without localisation. Trained on over 2.7 million unlabelled chest X-ray reports and over 28k segmented and labelled reports, our model achieves state-of-the-art performance on report segmentation (0.0442 WinDiff) and multi-label classification (0.84 report-level macro F1) over 37 radiological labels and 8 NLP-specific labels. This work establishes new benchmarks for fine-grained understanding of free-text radiology reports, with precise localisation of semantics unlocking new opportunities to improve computer vision model training and clinical decision support. We open-source our annotation tool, model code and pretrained weights to encourage future research.", }
Precise understanding of free-text radiology reports through localised extraction of clinical findings can enhance medical imaging applications like computer-aided diagnosis. We present a new task, that of segmenting radiology reports into topically meaningful passages (segments) and a transformer-based model that both segments reports into semantically coherent segments and classifies each segment using a set of 37 radiological abnormalities, thus enabling fine-grained analysis. This contrasts with prior work that performs classification on full reports without localisation. Trained on over 2.7 million unlabelled chest X-ray reports and over 28k segmented and labelled reports, our model achieves state-of-the-art performance on report segmentation (0.0442 WinDiff) and multi-label classification (0.84 report-level macro F1) over 37 radiological labels and 8 NLP-specific labels. This work establishes new benchmarks for fine-grained understanding of free-text radiology reports, with precise localisation of semantics unlocking new opportunities to improve computer vision model training and clinical decision support. We open-source our annotation tool, model code and pretrained weights to encourage future research.
[ "Zhu, Yuanyi", "Liakata, Maria", "Montana, Giovanni" ]
A Multi-Task Transformer Model for Fine-grained Labelling of Chest X-Ray Reports
lrec-main.78
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.79.bib
https://aclanthology.org/2024.lrec-main.79/
@inproceedings{isaka-etal-2024-analysis, title = "Analysis of Sensation-transfer Dialogues in Motorsports", author = "Isaka, Takeru and Otsuka, Atsushi and Toshima, Iwaki", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.79", pages = "876--886", abstract = "Clarifying the effects of subjective ideas on group performance is essential for future dialogue systems to improve mutual understanding among humans and group creativity. However, there has been little focus on dialogue research on quantitatively analyzing the effects of the quality and quantity of subjective information contained in dialogues on group performance. We hypothesize that the more subjective information interlocutors exchange, the better the group performance in collaborative work. We collected dialogues between drivers and engineers in motorsports when deciding how the car should be tuned as a suitable case to verify this hypothesis. Our analysis suggests that the greater the amount of subjective information (which we defined as {``}sensation{''}) in the driver{'}s utterances, the greater the race performance and driver satisfaction with the car{'}s tuning. The results indicate that it is essential for the development of dialogue research to create a corpus of situations that require high performance through collaboration among experts with different backgrounds but who have mastered their respective fields.", }
Clarifying the effects of subjective ideas on group performance is essential for future dialogue systems to improve mutual understanding among humans and group creativity. However, there has been little focus in dialogue research on quantitatively analyzing the effects of the quality and quantity of subjective information contained in dialogues on group performance. We hypothesize that the more subjective information interlocutors exchange, the better the group performance in collaborative work. As a suitable case to verify this hypothesis, we collected dialogues between drivers and engineers in motorsports when deciding how the car should be tuned. Our analysis suggests that the greater the amount of subjective information (which we defined as {``}sensation{''}) in the driver{'}s utterances, the greater the race performance and driver satisfaction with the car{'}s tuning. The results indicate that it is essential for the development of dialogue research to create a corpus of situations that require high performance through collaboration among experts with different backgrounds but who have mastered their respective fields.
[ "Isaka, Takeru", "Otsuka, Atsushi", "Toshima, Iwaki" ]
Analysis of Sensation-transfer Dialogues in Motorsports
lrec-main.79
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.80.bib
https://aclanthology.org/2024.lrec-main.80/
@inproceedings{tanigawa-etal-2024-analysis, title = "Analysis on Unsupervised Acquisition Process of Bilingual Vocabulary through Iterative Back-Translation", author = "Tanigawa, Takuma and Akiba, Tomoyosi and Tsukada, Hajime", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.80", pages = "887--892", abstract = "In this paper, we investigate how new bilingual vocabulary is acquired through Iterative Back-Translation (IBT), which is known as a data augmentation method for machine translation from monolingual data of both source and target languages. To reveal the acquisition process, we first identify the word translation pairs in test data that do not exist in a bilingual data but do only in two monolingual data, then observe how many pairs are successfully translated by the translation model trained through IBT. We experimented on it with domain adaptation settings on two language pairs. Our experimental evaluation showed that more than 60{\%} of the new bilingual vocabulary is successfully acquired through IBT along with the improvement in the translation quality in terms of BLEU. It also revealed that new bilingual vocabulary was gradually acquired by repeating IBT iterations. From the results, we present our hypothesis on the process of new bilingual vocabulary acquisition where the context of the words plays a critical role in the success of the acquisition.", }
In this paper, we investigate how new bilingual vocabulary is acquired through Iterative Back-Translation (IBT), which is known as a data augmentation method for machine translation from monolingual data of both source and target languages. To reveal the acquisition process, we first identify the word translation pairs in the test data that do not exist in the bilingual data but only in the two monolingual datasets, and then observe how many of these pairs are successfully translated by the translation model trained through IBT. We experimented with domain adaptation settings on two language pairs. Our experimental evaluation showed that more than 60{\%} of the new bilingual vocabulary is successfully acquired through IBT along with the improvement in the translation quality in terms of BLEU. It also revealed that new bilingual vocabulary was gradually acquired by repeating IBT iterations. From the results, we present our hypothesis on the process of new bilingual vocabulary acquisition, where the context of the words plays a critical role in the success of the acquisition.
[ "Tanigawa, Takuma", "Akiba, Tomoyosi", "Tsukada, Hajime" ]
Analysis on Unsupervised Acquisition Process of Bilingual Vocabulary through Iterative Back-Translation
lrec-main.80
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.81.bib
https://aclanthology.org/2024.lrec-main.81/
@inproceedings{wang-etal-2024-analyzing, title = "Analyzing Chain-of-thought Prompting in Black-Box Large Language Models via Estimated {V}-information", author = "Wang, Zecheng and Li, Chunshan and Yang, Zhao and Liu, Qingbin and Hao, Yanchao and Chen, Xi and Chu, Dianhui and Sui, Dianbo", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.81", pages = "893--903", abstract = "Chain-of-Thought (CoT) prompting combined with large language models (LLM) has shown great potential in improving performance on challenging reasoning tasks. While understanding why CoT prompting is effective is crucial for the application and improvement of CoT prompting, few studies have addressed this issue. Besides, almost no prior work has conducted theoretical analysis on CoT prompting in the context of black-box models. In this paper, we approach the analysis of CoT prompting in black-box LLMs from an information-theoretic perspective. Specifically, we propose a new metric, EPVI (Estimated Pointwise V-Information), which extends the concept of pointwise V-information to black-box models, quantifying the label-relevant new information introduced by CoT prompting beyond the pre-existing information in the input. Based on this, we conduct a series of experiments at both the task and instance levels to analyze CoT prompting, demonstrating that the effectiveness of CoT prompting can be attributed to its capacity to influence the difficulty of model inference by augmenting or reducing the model-usable information. Furthermore, we show that selecting high-quality demonstrations of CoT reasoning based on EPVI can improve the downstream performance of reasoning tasks.", }
Chain-of-Thought (CoT) prompting combined with large language models (LLM) has shown great potential in improving performance on challenging reasoning tasks. While understanding why CoT prompting is effective is crucial for the application and improvement of CoT prompting, few studies have addressed this issue. Besides, almost no prior work has conducted theoretical analysis on CoT prompting in the context of black-box models. In this paper, we approach the analysis of CoT prompting in black-box LLMs from an information-theoretic perspective. Specifically, we propose a new metric, EPVI (Estimated Pointwise V-Information), which extends the concept of pointwise V-information to black-box models, quantifying the label-relevant new information introduced by CoT prompting beyond the pre-existing information in the input. Based on this, we conduct a series of experiments at both the task and instance levels to analyze CoT prompting, demonstrating that the effectiveness of CoT prompting can be attributed to its capacity to influence the difficulty of model inference by augmenting or reducing the model-usable information. Furthermore, we show that selecting high-quality demonstrations of CoT reasoning based on EPVI can improve the downstream performance of reasoning tasks.
[ "Wang, Zecheng", "Li, Chunshan", "Yang, Zhao", "Liu, Qingbin", "Hao, Yanchao", "Chen, Xi", "Chu, Dianhui", "Sui, Dianbo" ]
Analyzing Chain-of-thought Prompting in Black-Box Large Language Models via Estimated V-information
lrec-main.81
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.82.bib
https://aclanthology.org/2024.lrec-main.82/
@inproceedings{kiehne-etal-2024-analyzing, title = "Analyzing Effects of Learning Downstream Tasks on Moral Bias in Large Language Models", author = {Kiehne, Niklas and Ljapunov, Alexander and B{\"a}tje, Marc and Balke, Wolf-Tilo}, editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.82", pages = "904--923", abstract = "Pre-training and fine-tuning large language models (LMs) is currently the state-of-the-art methodology for enabling data-scarce downstream tasks. However, the derived models still tend to replicate and perpetuate social biases. To understand this process in more detail, this paper investigates the actual effects of learning downstream tasks on moral bias in LMs. We develop methods to assess the agreement of LMs to explicitly codified norms in both pre-training and fine-tuning stages. Even if a pre-trained foundation model exhibits consistent norms, we find that introducing downstream tasks may indeed lead to unexpected inconsistencies in norm representation. Specifically, we observe two phenomena during fine-tuning across both masked and causal LMs: (1) pre-existing moral bias may be mitigated or amplified even when presented with opposing views and (2) prompt sensitivity may be negatively impacted. We provide empirical evidence of models deteriorating into conflicting states, where contradictory answers can easily be triggered by slight modifications in the input sequence. Our findings thus raise concerns about the general ability of LMs to mitigate moral biases effectively.", }
Pre-training and fine-tuning large language models (LMs) is currently the state-of-the-art methodology for enabling data-scarce downstream tasks. However, the derived models still tend to replicate and perpetuate social biases. To understand this process in more detail, this paper investigates the actual effects of learning downstream tasks on moral bias in LMs. We develop methods to assess the agreement of LMs to explicitly codified norms in both pre-training and fine-tuning stages. Even if a pre-trained foundation model exhibits consistent norms, we find that introducing downstream tasks may indeed lead to unexpected inconsistencies in norm representation. Specifically, we observe two phenomena during fine-tuning across both masked and causal LMs: (1) pre-existing moral bias may be mitigated or amplified even when presented with opposing views and (2) prompt sensitivity may be negatively impacted. We provide empirical evidence of models deteriorating into conflicting states, where contradictory answers can easily be triggered by slight modifications in the input sequence. Our findings thus raise concerns about the general ability of LMs to mitigate moral biases effectively.
[ "Kiehne, Niklas", "Ljapunov, Alex", "er", "B{\\\"a}tje, Marc", "Balke, Wolf-Tilo" ]
Analyzing Effects of Learning Downstream Tasks on Moral Bias in Large Language Models
lrec-main.82
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.83.bib
https://aclanthology.org/2024.lrec-main.83/
@inproceedings{proietti-etal-2024-analyzing, title = "Analyzing Homonymy Disambiguation Capabilities of Pretrained Language Models", author = "Proietti, Lorenzo and Perrella, Stefano and Tedeschi, Simone and Vulpis, Giulia and Lavalle, Leonardo and Sanchietti, Andrea and Ferrari, Andrea and Navigli, Roberto", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.83", pages = "924--938", abstract = "Word Sense Disambiguation (WSD) is a key task in Natural Language Processing (NLP), aiming to assign the correct meaning (sense) to a word in context. However, traditional WSD systems rely on WordNet as the underlying sense inventory, often differentiating meticulously between subtle nuances of word meanings, which may lead to excessive complexity and reduced practicality of WSD systems in today{'}s NLP. Indeed, current Pretrained Language Models (PLMs) do seem to be able to perform disambiguation, but it is not clear to what extent, or to what level of granularity, they actually operate. In this paper, we address these points and, firstly, introduce a new large-scale resource that leverages homonymy relations to systematically cluster WordNet senses, effectively reducing the granularity of word senses to a very coarse-grained level; secondly, we use this resource to train Homonymy Disambiguation systems and investigate whether PLMs are inherently able to differentiate coarse-grained word senses. Our findings demonstrate that, while state-of-the-art models still struggle to choose the correct fine-grained meaning of a word in context, Homonymy Disambiguation systems are able to differentiate homonyms with up to 95{\%} accuracy scores even without fine-tuning the underlying PLM. We release our data and code at https://github.com/SapienzaNLP/homonymy-wsd.", }
Word Sense Disambiguation (WSD) is a key task in Natural Language Processing (NLP), aiming to assign the correct meaning (sense) to a word in context. However, traditional WSD systems rely on WordNet as the underlying sense inventory, often differentiating meticulously between subtle nuances of word meanings, which may lead to excessive complexity and reduced practicality of WSD systems in today{'}s NLP. Indeed, current Pretrained Language Models (PLMs) do seem to be able to perform disambiguation, but it is not clear to what extent, or to what level of granularity, they actually operate. In this paper, we address these points and, firstly, introduce a new large-scale resource that leverages homonymy relations to systematically cluster WordNet senses, effectively reducing the granularity of word senses to a very coarse-grained level; secondly, we use this resource to train Homonymy Disambiguation systems and investigate whether PLMs are inherently able to differentiate coarse-grained word senses. Our findings demonstrate that, while state-of-the-art models still struggle to choose the correct fine-grained meaning of a word in context, Homonymy Disambiguation systems are able to differentiate homonyms with up to 95{\%} accuracy scores even without fine-tuning the underlying PLM. We release our data and code at https://github.com/SapienzaNLP/homonymy-wsd.
[ "Proietti, Lorenzo", "Perrella, Stefano", "Tedeschi, Simone", "Vulpis, Giulia", "Lavalle, Leonardo", "Sanchietti, Andrea", "Ferrari, Andrea", "Navigli, Roberto" ]
Analyzing Homonymy Disambiguation Capabilities of Pretrained Language Models
lrec-main.83
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.84.bib
https://aclanthology.org/2024.lrec-main.84/
@inproceedings{ikhwantri-etal-2024-analyzing, title = "Analyzing Interpretability of Summarization Model with Eye-gaze Information", author = "Ikhwantri, Fariz and Yamada, Hiroaki and Tokunaga, Takenobu", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.84", pages = "939--950", abstract = "Interpretation methods provide saliency scores indicating the importance of input words for neural summarization models. Prior work has analyzed models by comparing them to human behavior, often using eye-gaze as a proxy for human attention in reading tasks such as classification. This paper presents a framework to analyze the model behavior in summarization by comparing it to human summarization behavior using eye-gaze data. We examine two research questions: RQ1) whether model saliency conforms to human gaze during summarization and RQ2) how model saliency and human gaze affect summarization performance. For RQ1, we measure conformity by calculating the correlation between model saliency and human fixation counts. For RQ2, we conduct ablation experiments removing words/sentences considered important by models or humans. Experiments on two datasets with human eye-gaze during summarization partially confirm that model saliency aligns with human gaze (RQ1). However, ablation experiments show that removing highly-attended words/sentences from the human gaze does not significantly degrade performance compared with the removal by the model saliency (RQ2).", }
Interpretation methods provide saliency scores indicating the importance of input words for neural summarization models. Prior work has analyzed models by comparing them to human behavior, often using eye-gaze as a proxy for human attention in reading tasks such as classification. This paper presents a framework to analyze the model behavior in summarization by comparing it to human summarization behavior using eye-gaze data. We examine two research questions: RQ1) whether model saliency conforms to human gaze during summarization and RQ2) how model saliency and human gaze affect summarization performance. For RQ1, we measure conformity by calculating the correlation between model saliency and human fixation counts. For RQ2, we conduct ablation experiments removing words/sentences considered important by models or humans. Experiments on two datasets with human eye-gaze during summarization partially confirm that model saliency aligns with human gaze (RQ1). However, ablation experiments show that removing highly-attended words/sentences from the human gaze does not significantly degrade performance compared with the removal by the model saliency (RQ2).
[ "Ikhwantri, Fariz", "Yamada, Hiroaki", "Tokunaga, Takenobu" ]
Analyzing Interpretability of Summarization Model with Eye-gaze Information
lrec-main.84
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.85.bib
https://aclanthology.org/2024.lrec-main.85/
@inproceedings{xiao-etal-2024-analyzing, title = "Analyzing Large Language Models{'} Capability in Location Prediction", author = "Xiao, Zhaomin and Huang, Yan and Blanco, Eduardo", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.85", pages = "951--958", abstract = "In this paper, we investigate and evaluate large language models{'} capability in location prediction. We present experimental results with four models{---}FLAN-T5, FLAN-UL2, FLAN-Alpaca, and ChatGPT{---}in various instruction finetuning and exemplar settings. We analyze whether taking into account the context{---}tweets published before and after the tweet mentioning a location{---}is beneficial. Additionally, we conduct an ablation study to explore whether instruction modification is beneficial. Lastly, our qualitative analysis sheds light on the errors made by the best-performing model.", }
In this paper, we investigate and evaluate large language models{'} capability in location prediction. We present experimental results with four models{---}FLAN-T5, FLAN-UL2, FLAN-Alpaca, and ChatGPT{---}in various instruction finetuning and exemplar settings. We analyze whether taking into account the context{---}tweets published before and after the tweet mentioning a location{---}is beneficial. Additionally, we conduct an ablation study to explore whether instruction modification is beneficial. Lastly, our qualitative analysis sheds light on the errors made by the best-performing model.
[ "Xiao, Zhaomin", "Huang, Yan", "Blanco, Eduardo" ]
Analyzing Large Language Models' Capability in Location Prediction
lrec-main.85
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.86.bib
https://aclanthology.org/2024.lrec-main.86/
@inproceedings{ibaraki-etal-2024-analyzing, title = "Analyzing Occupational Distribution Representation in {J}apanese Language Models", author = "Ibaraki, Katsumi and Wu, Winston and Wang, Lu and Mihalcea, Rada", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.86", pages = "959--973", abstract = "Recent advances in large language models (LLMs) have enabled users to generate fluent and seemingly convincing text. However, these models have uneven performance in different languages, which is also associated with undesirable societal biases toward marginalized populations. Specifically, there is relatively little work on Japanese models, despite it being the thirteenth most widely spoken language. In this work, we first develop three Japanese language prompts to probe LLMs{'} understanding of Japanese names and their association between gender and occupations. We then evaluate a variety of English, multilingual, and Japanese models, correlating the models{'} outputs with occupation statistics from the Japanese Census Bureau from the last 100 years. Our findings indicate that models can associate Japanese names with the correct gendered occupations when using constrained decoding. However, with sampling or greedy decoding, Japanese language models have a preference for a small set of stereotypically gendered occupations, and multilingual models, though trained on Japanese, are not always able to understand Japanese prompts.", }
Recent advances in large language models (LLMs) have enabled users to generate fluent and seemingly convincing text. However, these models have uneven performance in different languages, which is also associated with undesirable societal biases toward marginalized populations. Specifically, there is relatively little work on Japanese models, despite it being the thirteenth most widely spoken language. In this work, we first develop three Japanese language prompts to probe LLMs{'} understanding of Japanese names and their association between gender and occupations. We then evaluate a variety of English, multilingual, and Japanese models, correlating the models{'} outputs with occupation statistics from the Japanese Census Bureau from the last 100 years. Our findings indicate that models can associate Japanese names with the correct gendered occupations when using constrained decoding. However, with sampling or greedy decoding, Japanese language models have a preference for a small set of stereotypically gendered occupations, and multilingual models, though trained on Japanese, are not always able to understand Japanese prompts.
[ "Ibaraki, Katsumi", "Wu, Winston", "Wang, Lu", "Mihalcea, Rada" ]
Analyzing Occupational Distribution Representation in Japanese Language Models
lrec-main.86
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.87.bib
https://aclanthology.org/2024.lrec-main.87/
@inproceedings{agarwal-etal-2024-analyzing, title = "Analyzing Symptom-based Depression Level Estimation through the Prism of Psychiatric Expertise", author = {Agarwal, Navneet and Milintsevich, Kirill and Metivier, Lucie and Rotharmel, Maud and Dias, Ga{\"e}l and Dollfus, Sonia}, editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.87", pages = "974--983", abstract = "The ever-growing number of people suffering from mental distress has motivated significant research initiatives towards automated depression estimation. Despite the multidisciplinary nature of the task, very few of these approaches include medical professionals in their research process, thus ignoring a vital source of domain knowledge. In this paper, we propose to bring the domain experts back into the loop and incorporate their knowledge within the gold-standard DAIC-WOZ dataset. In particular, we define a novel transformer-based architecture and analyse its performance in light of our expert annotations. Overall findings demonstrate a strong correlation between the psychological tendencies of medical professionals and the behavior of the proposed model, which additionally provides new state-of-the-art results.", }
The ever-growing number of people suffering from mental distress has motivated significant research initiatives towards automated depression estimation. Despite the multidisciplinary nature of the task, very few of these approaches include medical professionals in their research process, thus ignoring a vital source of domain knowledge. In this paper, we propose to bring the domain experts back into the loop and incorporate their knowledge within the gold-standard DAIC-WOZ dataset. In particular, we define a novel transformer-based architecture and analyse its performance in light of our expert annotations. Overall findings demonstrate a strong correlation between the psychological tendencies of medical professionals and the behavior of the proposed model, which additionally provides new state-of-the-art results.
[ "Agarwal, Navneet", "Milintsevich, Kirill", "Metivier, Lucie", "Rotharmel, Maud", "Dias, Ga{\\\"e}l", "Dollfus, Sonia" ]
Analyzing Symptom-based Depression Level Estimation through the Prism of Psychiatric Expertise
lrec-main.87
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.88.bib
https://aclanthology.org/2024.lrec-main.88/
@inproceedings{shiwakoti-etal-2024-analyzing, title = "Analyzing the Dynamics of Climate Change Discourse on {T}witter: A New Annotated Corpus and Multi-Aspect Classification", author = "Shiwakoti, Shuvam and Thapa, Surendrabikram and Rauniyar, Kritesh and Shah, Akshyat and Bhandari, Aashish and Naseem, Usman", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.88", pages = "984--994", abstract = "The discourse surrounding climate change on social media platforms has emerged as a significant avenue for understanding public sentiments, perspectives, and engagement with this critical global issue. The unavailability of publicly available datasets, coupled with ignoring the multi-aspect analysis of climate discourse on social media platforms, has underscored the necessity for further advancement in this area. To address this gap, in this paper, we present an extensive exploration of the intricate realm of climate change discourse on Twitter, leveraging a meticulously annotated \textit{ClimaConvo} dataset comprising 15,309 tweets. Our annotations encompass a rich spectrum, including aspects like relevance, stance, hate speech, the direction of hate, and humor, offering a nuanced understanding of the discourse dynamics. We address the challenges inherent in dissecting online climate discussions and detail our comprehensive annotation methodology. In addition to annotations, we conduct benchmarking assessments across various algorithms for six tasks: relevance detection, stance detection, hate speech identification, direction and target, and humor analysis. This assessment enhances our grasp of sentiment fluctuations and linguistic subtleties within the discourse. Our analysis extends to exploratory data examination, unveiling tweet distribution patterns, stance prevalence, and hate speech trends. Employing sophisticated topic modeling techniques uncovers underlying thematic clusters, providing insights into the diverse narrative threads woven within the discourse. The findings present a valuable resource for researchers, policymakers, and communicators seeking to navigate the intricacies of climate change discussions. The dataset and resources for this paper are available at https://github.com/shucoll/ClimaConvo.", }
The discourse surrounding climate change on social media platforms has emerged as a significant avenue for understanding public sentiments, perspectives, and engagement with this critical global issue. The unavailability of publicly available datasets, coupled with ignoring the multi-aspect analysis of climate discourse on social media platforms, has underscored the necessity for further advancement in this area. To address this gap, in this paper, we present an extensive exploration of the intricate realm of climate change discourse on Twitter, leveraging a meticulously annotated \textit{ClimaConvo} dataset comprising 15,309 tweets. Our annotations encompass a rich spectrum, including aspects like relevance, stance, hate speech, the direction of hate, and humor, offering a nuanced understanding of the discourse dynamics. We address the challenges inherent in dissecting online climate discussions and detail our comprehensive annotation methodology. In addition to annotations, we conduct benchmarking assessments across various algorithms for six tasks: relevance detection, stance detection, hate speech identification, direction and target, and humor analysis. This assessment enhances our grasp of sentiment fluctuations and linguistic subtleties within the discourse. Our analysis extends to exploratory data examination, unveiling tweet distribution patterns, stance prevalence, and hate speech trends. Employing sophisticated topic modeling techniques uncovers underlying thematic clusters, providing insights into the diverse narrative threads woven within the discourse. The findings present a valuable resource for researchers, policymakers, and communicators seeking to navigate the intricacies of climate change discussions. The dataset and resources for this paper are available at https://github.com/shucoll/ClimaConvo.
[ "Shiwakoti, Shuvam", "Thapa, Surendrabikram", "Rauniyar, Kritesh", "Shah, Akshyat", "Bh", "ari, Aashish", "Naseem, Usman" ]
Analyzing the Dynamics of Climate Change Discourse on Twitter: A New Annotated Corpus and Multi-Aspect Classification
lrec-main.88
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.89.bib
https://aclanthology.org/2024.lrec-main.89/
@inproceedings{haldar-hockenmaier-2024-analyzing, title = "Analyzing the Performance of Large Language Models on Code Summarization", author = "Haldar, Rajarshi and Hockenmaier, Julia", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.89", pages = "995--1008", abstract = "Large language models (LLMs) such as Llama 2 perform very well on tasks that involve both natural language and source code, particularly code summarization and code generation. We show that for the task of code summarization, the performance of these models on individual examples often depends on the amount of (subword) token overlap between the code and the corresponding reference natural language descriptions in the dataset. This token overlap arises because the reference descriptions in standard datasets (corresponding to docstrings in large code bases) are often highly similar to the names of the functions they describe. We also show that this token overlap occurs largely in the function names of the code and compare the relative performance of these models after removing function names versus removing code structure. We also show that using multiple evaluation metrics like BLEU and BERTScore gives us very little additional insight since these metrics are highly correlated with each other.", }
Large language models (LLMs) such as Llama 2 perform very well on tasks that involve both natural language and source code, particularly code summarization and code generation. We show that for the task of code summarization, the performance of these models on individual examples often depends on the amount of (subword) token overlap between the code and the corresponding reference natural language descriptions in the dataset. This token overlap arises because the reference descriptions in standard datasets (corresponding to docstrings in large code bases) are often highly similar to the names of the functions they describe. We also show that this token overlap occurs largely in the function names of the code and compare the relative performance of these models after removing function names versus removing code structure. We also show that using multiple evaluation metrics like BLEU and BERTScore gives us very little additional insight since these metrics are highly correlated with each other.
[ "Haldar, Rajarshi", "Hockenmaier, Julia" ]
Analyzing the Performance of Large Language Models on Code Summarization
lrec-main.89
Poster
2404.08018
[ "https://github.com/rajarshihaldar/analyze-llm-code-summarization" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.90.bib
https://aclanthology.org/2024.lrec-main.90/
@inproceedings{weller-di-marco-fraser-2024-analyzing, title = "Analyzing the Understanding of Morphologically Complex Words in Large Language Models", author = "Weller-Di Marco, Marion and Fraser, Alexander", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.90", pages = "1009--1020", abstract = "We empirically study the ability of a Large Language Model (gpt-3.5-turbo-instruct) to understand morphologically complex words. In our experiments, we looked at a variety of tasks to analyse German compounds with regard to compositional word formation and derivation, such as identifying the head noun of existing and novel compounds, identifying the shared verb stem between two words, or recognizing words constructed with inappropriately used derivation morphemes as invalid. Our results show that the language model is generally capable of solving most tasks, except for the task of identifying ill-formed word forms. While the model demonstrated a good overall understanding of complex words and their word-internal structure, the results also suggest that there is no formal knowledge of derivational rules, but rather an interpretation of the observed word parts to derive the meaning of a word.", }
We empirically study the ability of a Large Language Model (gpt-3.5-turbo-instruct) to understand morphologically complex words. In our experiments, we looked at a variety of tasks to analyse German compounds with regard to compositional word formation and derivation, such as identifying the head noun of existing and novel compounds, identifying the shared verb stem between two words, or recognizing words constructed with inappropriately used derivation morphemes as invalid. Our results show that the language model is generally capable of solving most tasks, except for the task of identifying ill-formed word forms. While the model demonstrated a good overall understanding of complex words and their word-internal structure, the results also suggest that there is no formal knowledge of derivational rules, but rather an interpretation of the observed word parts to derive the meaning of a word.
[ "Weller-Di Marco, Marion", "Fraser, Alex", "er" ]
Analyzing the Understanding of Morphologically Complex Words in Large Language Models
lrec-main.90
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.91.bib
https://aclanthology.org/2024.lrec-main.91/
@inproceedings{przepiorkowski-etal-2024-argument, title = "An Argument for Symmetric Coordination from Dependency Length Minimization: A Replication Study", author = "Przepi{\'o}rkowski, Adam and Borysiak, Magdalena and G{\l}owacki, Adam", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.91", pages = "1021--1033", abstract = "It is well known that left conjuncts tend to be shorter in English coordinate structures. On the basis of Penn Treebank, Przepi{\'o}rkowski and Wo{\'z}niak 2023 (in ACL 2023 proceedings) show that this tendency depends on the difference between lengths of conjuncts: the larger the difference, the stronger the tendency for the shorter conjunct to occur on the left. However, this dynamic is observed only when the governor of the coordinate structure is on the left of the coordination (e.g., {``}Bring apples and oranges!{''}) or when it is absent (e.g., {``}Come and sing!{''}), and not when it is on the right (e.g., {``}Apples and oranges fell{''}). Given the principle of Dependency Length Minimization, this turns out to provide an argument for the symmetric structure of coordination. We replicate and sharpen this result on the basis of a much larger dataset: parts of the COCA corpus parsed with Stanza. We also investigate the dependence of this result on the assumed unit of length (word vs. character) and on genre.", }
It is well known that left conjuncts tend to be shorter in English coordinate structures. On the basis of Penn Treebank, Przepi{\'o}rkowski and Wo{\'z}niak 2023 (in ACL 2023 proceedings) show that this tendency depends on the difference between lengths of conjuncts: the larger the difference, the stronger the tendency for the shorter conjunct to occur on the left. However, this dynamic is observed only when the governor of the coordinate structure is on the left of the coordination (e.g., {``}Bring apples and oranges!{''}) or when it is absent (e.g., {``}Come and sing!{''}), and not when it is on the right (e.g., {``}Apples and oranges fell{''}). Given the principle of Dependency Length Minimization, this turns out to provide an argument for the symmetric structure of coordination. We replicate and sharpen this result on the basis of a much larger dataset: parts of the COCA corpus parsed with Stanza. We also investigate the dependence of this result on the assumed unit of length (word vs. character) and on genre.
[ "Przepi{\\'o}rkowski, Adam", "Borysiak, Magdalena", "G{\\l}owacki, Adam" ]
An Argument for Symmetric Coordination from Dependency Length Minimization: A Replication Study
lrec-main.91
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.92.bib
https://aclanthology.org/2024.lrec-main.92/
@inproceedings{shao-etal-2024-natural, title = "A Natural Approach for Synthetic Short-Form Text Analysis", author = "Shao, Ruiting and Schwarz, Ryan and Clifton, Christopher and Delp, Edward", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.92", pages = "1034--1042", abstract = "Detecting synthetically generated text in the wild has become increasingly difficult with advances in Natural Language Generation techniques and the proliferation of freely available Large Language Models (LLMs). Social media and news sites can be flooded with synthetically generated misinformation via tweets and posts while authentic users can inadvertently spread this text via shares and retweets. Most modern natural language processing techniques designed to detect synthetically generated text focus primarily on long-form content, such as news articles, or incorporate stylometric characteristics and metadata during their analysis. Unfortunately, for short-form text like tweets, this information is often unavailable, usually detached from its original source, displayed out of context, and the text itself is often too short or informal to yield significant information from stylometry. This paper proposes a method of detecting synthetically generated tweets via a Transformer architecture and incorporating unique style-based features. Additionally, we have created a new dataset consisting of human-generated and Large Language Model generated tweets for 4 topics and another dataset consisting of tweets paraphrased by 3 different paraphrase models.", }
Detecting synthetically generated text in the wild has become increasingly difficult with advances in Natural Language Generation techniques and the proliferation of freely available Large Language Models (LLMs). Social media and news sites can be flooded with synthetically generated misinformation via tweets and posts while authentic users can inadvertently spread this text via shares and retweets. Most modern natural language processing techniques designed to detect synthetically generated text focus primarily on long-form content, such as news articles, or incorporate stylometric characteristics and metadata during their analysis. Unfortunately, for short-form text like tweets, this information is often unavailable, usually detached from its original source, displayed out of context, and the text itself is often too short or informal to yield significant information from stylometry. This paper proposes a method of detecting synthetically generated tweets via a Transformer architecture and incorporating unique style-based features. Additionally, we have created a new dataset consisting of human-generated and Large Language Model generated tweets for 4 topics and another dataset consisting of tweets paraphrased by 3 different paraphrase models.
[ "Shao, Ruiting", "Schwarz, Ryan", "Clifton, Christopher", "Delp, Edward" ]
A Natural Approach for Synthetic Short-Form Text Analysis
lrec-main.92
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.93.bib
https://aclanthology.org/2024.lrec-main.93/
@inproceedings{gunduz-etal-2024-automated, title = "An Automated End-to-End Open-Source Software for High-Quality Text-to-Speech Dataset Generation", author = "Gunduz, Ahmet and Yuksel, Kamer Ali and Darwish, Kareem and Javadi, Golara and Minazzi, Fabio and Sobieski, Nicola and Brati{\`e}res, S{\'e}bastien", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.93", pages = "1043--1051", abstract = "Data availability is crucial for advancing artificial intelligence applications, including voice-based technologies. As content creation, particularly in social media, experiences increasing demand, translation and text-to-speech (TTS) technologies have become essential tools. Notably, the performance of these TTS technologies is highly dependent on the quality of the training data, emphasizing the mutual dependence of data availability and technological progress. This paper introduces an end-to-end tool to generate high-quality datasets for text-to-speech (TTS) models to address this critical need for high-quality data. The contributions of this work are manifold and include: the integration of language-specific phoneme distribution into sample selection, automation of the recording process, automated and human-in-the-loop quality assurance of recordings, and processing of recordings to meet specified formats. The proposed application aims to streamline the dataset creation process for TTS models through these features, thereby facilitating advancements in voice-based technologies.", }
Data availability is crucial for advancing artificial intelligence applications, including voice-based technologies. As content creation, particularly in social media, experiences increasing demand, translation and text-to-speech (TTS) technologies have become essential tools. Notably, the performance of these TTS technologies is highly dependent on the quality of the training data, emphasizing the mutual dependence of data availability and technological progress. This paper introduces an end-to-end tool to generate high-quality datasets for text-to-speech (TTS) models to address this critical need for high-quality data. The contributions of this work are manifold and include: the integration of language-specific phoneme distribution into sample selection, automation of the recording process, automated and human-in-the-loop quality assurance of recordings, and processing of recordings to meet specified formats. The proposed application aims to streamline the dataset creation process for TTS models through these features, thereby facilitating advancements in voice-based technologies.
[ "Gunduz, Ahmet", "Yuksel, Kamer Ali", "Darwish, Kareem", "Javadi, Golara", "Minazzi, Fabio", "Sobieski, Nicola", "Brati{\\`e}res, S{\\'e}bastien" ]
An Automated End-to-End Open-Source Software for High-Quality Text-to-Speech Dataset Generation
lrec-main.93
Poster
2402.16380
[ "https://github.com/aixplain/tts-qa" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.94.bib
https://aclanthology.org/2024.lrec-main.94/
@inproceedings{sun-xue-2024-anchor, title = "Anchor and Broadcast: An Efficient Concept Alignment Approach for Evaluation of Semantic Graphs", author = "Sun, Haibo and Xue, Nianwen", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.94", pages = "1052--1062", abstract = "In this paper, we present AnCast, an intuitive and efficient tool for evaluating graph-based meaning representations (MR). AnCast implements evaluation metrics that are well understood in the NLP community, and they include concept F1, unlabeled relation F1, labeled relation F1, and weighted relation F1. The efficiency of the tool comes from a novel anchor broadcast alignment algorithm that is not subject to the trappings of local maxima. We show through experimental results that the AnCast score is highly correlated with the widely used Smatch score, but its computation takes only about 40{\%} of the time.", }
In this paper, we present AnCast, an intuitive and efficient tool for evaluating graph-based meaning representations (MR). AnCast implements evaluation metrics that are well understood in the NLP community, and they include concept F1, unlabeled relation F1, labeled relation F1, and weighted relation F1. The efficiency of the tool comes from a novel anchor broadcast alignment algorithm that is not subject to the trappings of local maxima. We show through experimental results that the AnCast score is highly correlated with the widely used Smatch score, but its computation takes only about 40{\%} of the time.
[ "Sun, Haibo", "Xue, Nianwen" ]
Anchor and Broadcast: An Efficient Concept Alignment Approach for Evaluation of Semantic Graphs
lrec-main.94
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.95.bib
https://aclanthology.org/2024.lrec-main.95/
@inproceedings{xu-etal-2024-effective, title = "An Effective Span-based Multimodal Named Entity Recognition with Consistent Cross-Modal Alignment", author = "Xu, Yongxiu and Xu, Hao and Huang, Heyan and Cui, Shiyao and Tang, Minghao and Wang, Longzheng and Xu, Hongbo", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.95", pages = "1063--1072", abstract = "With the increasing availability of multimodal content on social media, consisting primarily of text and images, multimodal named entity recognition (MNER) has gained widespread attention. A fundamental challenge of MNER lies in effectively aligning different modalities. However, the majority of current approaches rely on a word-based sequence labeling framework and align the image and text at inconsistent semantic levels (whole image-words or regions-words). This misalignment may lead to inferior entity recognition performance. To address this issue, we propose an effective span-based method, named SMNER, which achieves a more consistent multimodal alignment from the information-theoretic and cross-modal interaction perspectives, respectively. Specifically, we first introduce a cross-modal information bottleneck module for the global-level multimodal alignment (whole image-whole text). This module aims to encourage the semantic distribution of the image to be closer to the semantic distribution of the text, which can enable the filtering out of visual noise. Next, we introduce a cross-modal attention module for the local-level multimodal alignment (regions-spans), which captures the correlations between regions in the image and spans in the text, enabling a more precise alignment of the two modalities. Extensive experiments conducted on two benchmark datasets demonstrate that SMNER outperforms the state-of-the-art baselines.", }
With the increasing availability of multimodal content on social media, consisting primarily of text and images, multimodal named entity recognition (MNER) has gained widespread attention. A fundamental challenge of MNER lies in effectively aligning different modalities. However, the majority of current approaches rely on a word-based sequence labeling framework and align the image and text at inconsistent semantic levels (whole image-words or regions-words). This misalignment may lead to inferior entity recognition performance. To address this issue, we propose an effective span-based method, named SMNER, which achieves a more consistent multimodal alignment from the information-theoretic and cross-modal interaction perspectives, respectively. Specifically, we first introduce a cross-modal information bottleneck module for the global-level multimodal alignment (whole image-whole text). This module aims to encourage the semantic distribution of the image to be closer to the semantic distribution of the text, which can enable the filtering out of visual noise. Next, we introduce a cross-modal attention module for the local-level multimodal alignment (regions-spans), which captures the correlations between regions in the image and spans in the text, enabling a more precise alignment of the two modalities. Extensive experiments conducted on two benchmark datasets demonstrate that SMNER outperforms the state-of-the-art baselines.
[ "Xu, Yongxiu", "Xu, Hao", "Huang, Heyan", "Cui, Shiyao", "Tang, Minghao", "Wang, Longzheng", "Xu, Hongbo" ]
An Effective Span-based Multimodal Named Entity Recognition with Consistent Cross-Modal Alignment
lrec-main.95
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.96.bib
https://aclanthology.org/2024.lrec-main.96/
@inproceedings{omura-etal-2024-empirical, title = "An Empirical Study of Synthetic Data Generation for Implicit Discourse Relation Recognition", author = "Omura, Kazumasa and Cheng, Fei and Kurohashi, Sadao", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.96", pages = "1073--1085", abstract = "Implicit Discourse Relation Recognition (IDRR), which is the task of recognizing the semantic relation between given text spans that do not contain overt clues, is a long-standing and challenging problem. In particular, the paucity of training data for some error-prone discourse relations makes the problem even more challenging. To address this issue, we propose a method of generating synthetic data for IDRR using a large language model. The proposed method is twofold: extraction of confusing discourse relation pairs based on false negative rate and synthesis of data focused on the confusion. The key points of our proposed method are utilizing a confusion matrix and adopting two-stage prompting to obtain effective synthetic data. According to the proposed method, we generated synthetic data several times larger than training examples for some error-prone discourse relations and incorporated it into training. As a result of experiments, we achieved state-of-the-art macro-F1 performance thanks to the synthetic data without sacrificing micro-F1 performance and demonstrated its positive effects especially on recognizing some infrequent discourse relations.", }
Implicit Discourse Relation Recognition (IDRR), which is the task of recognizing the semantic relation between given text spans that do not contain overt clues, is a long-standing and challenging problem. In particular, the paucity of training data for some error-prone discourse relations makes the problem even more challenging. To address this issue, we propose a method of generating synthetic data for IDRR using a large language model. The proposed method is twofold: extraction of confusing discourse relation pairs based on false negative rate and synthesis of data focused on the confusion. The key points of our proposed method are utilizing a confusion matrix and adopting two-stage prompting to obtain effective synthetic data. According to the proposed method, we generated synthetic data several times larger than training examples for some error-prone discourse relations and incorporated it into training. As a result of experiments, we achieved state-of-the-art macro-F1 performance thanks to the synthetic data without sacrificing micro-F1 performance and demonstrated its positive effects especially on recognizing some infrequent discourse relations.
[ "Omura, Kazumasa", "Cheng, Fei", "Kurohashi, Sadao" ]
An Empirical Study of Synthetic Data Generation for Implicit Discourse Relation Recognition
lrec-main.96
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.97.bib
https://aclanthology.org/2024.lrec-main.97/
@inproceedings{supryadi-etal-2024-empirical, title = "An Empirical Study on the Robustness of Massively Multilingual Neural Machine Translation", author = "Supryadi, Supryadi and Pan, Leiyu and Xiong, Deyi", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.97", pages = "1086--1097", abstract = "Massively multilingual neural machine translation (MMNMT) has been proven to enhance the translation quality of low-resource languages. In this paper, we empirically investigate the translation robustness of Indonesian-Chinese translation in the face of various types of naturally occurring noise. To assess this, we create a robustness evaluation benchmark dataset for Indonesian-Chinese translation. This dataset is automatically translated into Chinese using four NLLB-200 models of different sizes. We conduct both automatic and human evaluations. Our in-depth analysis reveals the correlations between translation error types and the types of noise present, how these correlations change across different model sizes, and the relationships between automatic evaluation indicators and human evaluation indicators. The dataset is publicly available at https://github.com/tjunlp-lab/ID-ZH-MTRobustEval.", }
Massively multilingual neural machine translation (MMNMT) has been proven to enhance the translation quality of low-resource languages. In this paper, we empirically investigate the translation robustness of Indonesian-Chinese translation in the face of various types of naturally occurring noise. To assess this, we create a robustness evaluation benchmark dataset for Indonesian-Chinese translation. This dataset is automatically translated into Chinese using four NLLB-200 models of different sizes. We conduct both automatic and human evaluations. Our in-depth analysis reveals the correlations between translation error types and the types of noise present, how these correlations change across different model sizes, and the relationships between automatic evaluation indicators and human evaluation indicators. The dataset is publicly available at https://github.com/tjunlp-lab/ID-ZH-MTRobustEval.
[ "Supryadi, Supryadi", "Pan, Leiyu", "Xiong, Deyi" ]
An Empirical Study on the Robustness of Massively Multilingual Neural Machine Translation
lrec-main.97
Poster
2405.07673
[ "https://github.com/tjunlp-lab/id-zh-mtrobusteval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.98.bib
https://aclanthology.org/2024.lrec-main.98/
@inproceedings{zhang-etal-2024-evaluation, title = "An Evaluation of {C}roatian {ASR} Models for {\v{C}}akavian Transcription", author = "Zhang, Shulin and Hale, John and Renwick, Margaret and Vrzi{\'c}, Zvjezdana and Langston, Keith", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.98", pages = "1098--1104", abstract = "To assist in the documentation of {\v{C}}akavian, an endangered language variety closely related to Croatian, we test four currently available ASR models that are trained with Croatian data and assess their performance in the transcription of {\v{C}}akavian audio data. We compare the models{'} word error rates, analyze the word-level error types, and showcase the most frequent Deletion and Substitution errors. The evaluation results indicate that the best-performing system for transcribing {\v{C}}akavian was a CTC-based variant of the Conformer model.", }
To assist in the documentation of {\v{C}}akavian, an endangered language variety closely related to Croatian, we test four currently available ASR models that are trained with Croatian data and assess their performance in the transcription of {\v{C}}akavian audio data. We compare the models{'} word error rates, analyze the word-level error types, and showcase the most frequent Deletion and Substitution errors. The evaluation results indicate that the best-performing system for transcribing {\v{C}}akavian was a CTC-based variant of the Conformer model.
[ "Zhang, Shulin", "Hale, John", "Renwick, Margaret", "Vrzi{\\'c}, Zvjezdana", "Langston, Keith" ]
An Evaluation of Croatian ASR Models for Čakavian Transcription
lrec-main.98
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.99.bib
https://aclanthology.org/2024.lrec-main.99/
@inproceedings{wu-etal-2024-event, title = "An Event-based Abductive Learning for Hard Time-sensitive Question Answering", author = "Wu, Shaojuan and Li, Jitong and Zhang, Xiaowang and Feng, Zhiyong", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.99", pages = "1105--1115", abstract = "Time-Sensitive Question Answering (TSQA) is the task of answering questions qualified for a certain timestamp based on the given document. It is split into easy and hard modes depending on whether the document contains time qualifiers mentioned in the question. While existing models have performed well on easy mode, their performance is significantly reduced for answering hard time-sensitive questions, whose time qualifiers are implicit in the document. An intuitive idea is to match temporal events in the given document by treating the time-sensitive question as a temporal event with missing objects. However, not all temporal events extracted from the document have explicit time qualifiers. In this paper, we propose an Event-AL framework, in which a graph pruning model is designed to locate the timespan of implicit temporal events by capturing temporal relations between events. Moreover, we present an abductive reasoning module to determine proper objects while providing explanations. Besides, as the same relation may be scattered throughout the document in diverse expressions, a relation-based prompt is introduced to instruct LLMs in extracting candidate temporal events. We conduct extensive experiments and the results show that Event-AL outperforms strong baselines for hard time-sensitive questions, with a 12.7{\%} improvement in EM scores. In addition, it also exhibits great superiority for multi-answer and beyond hard time-sensitive questions.", }
Time-Sensitive Question Answering (TSQA) is the task of answering questions qualified for a certain timestamp based on the given document. It is split into easy and hard modes depending on whether the document contains time qualifiers mentioned in the question. While existing models have performed well on easy mode, their performance is significantly reduced for answering hard time-sensitive questions, whose time qualifiers are implicit in the document. An intuitive idea is to match temporal events in the given document by treating the time-sensitive question as a temporal event with missing objects. However, not all temporal events extracted from the document have explicit time qualifiers. In this paper, we propose an Event-AL framework, in which a graph pruning model is designed to locate the timespan of implicit temporal events by capturing temporal relations between events. Moreover, we present an abductive reasoning module to determine proper objects while providing explanations. Besides, as the same relation may be scattered throughout the document in diverse expressions, a relation-based prompt is introduced to instruct LLMs in extracting candidate temporal events. We conduct extensive experiments and the results show that Event-AL outperforms strong baselines for hard time-sensitive questions, with a 12.7{\%} improvement in EM scores. In addition, it also exhibits great superiority for multi-answer and beyond hard time-sensitive questions.
[ "Wu, Shaojuan", "Li, Jitong", "Zhang, Xiaowang", "Feng, Zhiyong" ]
An Event-based Abductive Learning for Hard Time-sensitive Question Answering
lrec-main.99
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lrec-main.100.bib
https://aclanthology.org/2024.lrec-main.100/
@inproceedings{de-gibert-etal-2024-new, title = "A New Massive Multilingual Dataset for High-Performance Language Technologies", author = {de Gibert, Ona and Nail, Graeme and Arefyev, Nikolay and Ba{\~n}{\'o}n, Marta and van der Linde, Jelmer and Ji, Shaoxiong and Zaragoza-Bernabeu, Jaume and Aulamo, Mikko and Ram{\'\i}rez-S{\'a}nchez, Gema and Kutuzov, Andrey and Pyysalo, Sampo and Oepen, Stephan and Tiedemann, J{\"o}rg}, editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.100", pages = "1116--1128", abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.", }
We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.
[ "de Gibert, Ona", "Nail, Graeme", "Arefyev, Nikolay", "Ba{\\~n}{\\'o}n, Marta", "van der Linde, Jelmer", "Ji, Shaoxiong", "Zaragoza-Bernabeu, Jaume", "Aulamo, Mikko", "Ram{\\'\\i}rez-S{\\'a}nchez, Gema", "Kutuzov, Andrey", "Pyysalo, Sampo", "Oepen, Stephan", "Tiedemann, J{\\\"o}rg" ]
A New Massive Multilingual Dataset for High-Performance Language Technologies
lrec-main.100
Poster
2403.14009
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]