Papers
arxiv:1810.09305

WikiHow: A Large Scale Text Summarization Dataset

Published on Oct 18, 2018
Abstract

Sequence-to-sequence models have recently achieved state-of-the-art performance in summarization. However, few large-scale, high-quality datasets are available, and almost all of them consist of news articles with a specific writing style. Moreover, abstractive human-style systems that describe content at a deeper level require data with higher levels of abstraction. In this paper, we present WikiHow, a dataset of more than 230,000 article-summary pairs extracted and constructed from an online knowledge base written by different human authors. The articles span a wide range of topics and therefore represent a high diversity of styles. We evaluate existing methods on WikiHow to highlight its challenges and set baselines for further improvement.
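The article-summary pairs are constructed from the structure of WikiHow pages, where each step opens with a short one-line gist followed by a longer explanation; concatenating the gists yields the summary and the explanations yield the article. A minimal sketch of that pairing scheme (the function and data below are illustrative, not the authors' actual preprocessing code):

```python
def build_pair(steps):
    """Build one (article, summary) pair from a list of (gist, explanation)
    tuples representing the steps of a single WikiHow article.

    The short gist lines are joined into the summary; the longer
    explanations are joined into the article body.
    """
    summary = " ".join(gist for gist, _ in steps)
    article = " ".join(body for _, body in steps)
    return article, summary

# Hypothetical example steps from a how-to article:
steps = [
    ("Preheat the oven.", "Set it to 180 degrees and wait ten minutes."),
    ("Mix the batter.", "Combine flour, eggs, and milk in a bowl."),
]
article, summary = build_pair(steps)
```

Here `summary` becomes "Preheat the oven. Mix the batter." while `article` holds the concatenated explanations, mirroring the abstraction gap between summary and article that the dataset is designed to capture.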
