---
license: apache-2.0
task_categories:
- other
tags:
- topic-modeling
- llm-evaluation
- benchmark
- legislation
- wikipedia
---
# Dataset Overview
This repository contains benchmark datasets for evaluating Large Language Model (LLM)-based topic discovery methods and comparing them against traditional topic models. The datasets are a resource for researchers studying topic modeling and LLM capabilities in this domain. The work is described in the paper *Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs*. Original data source: GitHub.
## Bills Dataset
The Bills Dataset is a collection of legislative documents containing 32,661 bill summaries (train) from the 110th–114th U.S. Congresses, categorized into 21 top-level and 112 secondary-level topics. A test split of 15.2K summaries is also included.
### Loading the Bills Dataset
```python
from datasets import load_dataset

# Load the train and test splits
train_dataset = load_dataset('zli12321/Bills', split='train')
test_dataset = load_dataset('zli12321/Bills', split='test')
```
## Wiki Dataset
The Wiki dataset consists of 14,290 articles spanning 15 high-level and 45 mid-level topics, including widely recognized public topics such as music and anime. A test split of 8.02K summaries is included.
## Synthetic Science Fiction (pending internal clearance)
Please cite the relevant papers below if you find the data useful. Do not hesitate to open an issue or email us if you run into problems!
## Citation
If you cite findings that LLM-based topic generation exhibits hallucination or instability, or that coherence metrics are not applicable to LLM-based topic models:

```bibtex
@misc{li2025largelanguagemodelsstruggle,
  title={Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs},
  author={Zongxia Li and Lorena Calvo-Bartolomé and Alexander Hoyle and Paiheng Xu and Alden Dima and Juan Francisco Fung and Jordan Boyd-Graber},
  year={2025},
  eprint={2502.14748},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.14748},
}
```
If you use the human annotations or preprocessing:

```bibtex
@inproceedings{li-etal-2024-improving,
  title = "Improving the {TENOR} of Labeling: Re-evaluating Topic Models for Content Analysis",
  author = "Li, Zongxia and
    Mao, Andrew and
    Stephens, Daniel and
    Goel, Pranav and
    Walpole, Emily and
    Dima, Alden and
    Fung, Juan and
    Boyd-Graber, Jordan",
  editor = "Graham, Yvette and
    Purver, Matthew",
  booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
  month = mar,
  year = "2024",
  address = "St. Julian{'}s, Malta",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2024.eacl-long.51/",
  pages = "840--859",
}
```
If you cite the claim that automated coherence does not generalize to neural topic models:

```bibtex
@inproceedings{hoyle-etal-2021-automated,
  title = "Is Automated Topic Evaluation Broken? The Incoherence of Coherence",
  author = "Hoyle, Alexander Miserlis and
    Goel, Pranav and
    Hian-Cheong, Andrew and
    Peskov, Denis and
    Boyd-Graber, Jordan and
    Resnik, Philip",
  booktitle = "Advances in Neural Information Processing Systems",
  year = "2021",
  url = "https://arxiv.org/abs/2107.02173",
}
```
If you use the ground-truth evaluations or stability analyses:

```bibtex
@inproceedings{hoyle-etal-2022-neural,
  title = "Are Neural Topic Models Broken?",
  author = "Hoyle, Alexander Miserlis and
    Goel, Pranav and
    Sarkar, Rupak and
    Resnik, Philip",
  booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
  year = "2022",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2022.findings-emnlp.390",
  doi = "10.18653/v1/2022.findings-emnlp.390",
  pages = "5321--5344",
}
```