
📖 READoc

📄 Paper | 💻 Code | Current Version: v0.1

The READoc dataset, introduced in the paper READoc: A Unified Benchmark for Realistic Document Structured Extraction.

Abstract

Document Structured Extraction (DSE) aims to extract structured content from raw documents. Despite the emergence of numerous DSE systems (e.g. Marker, Nougat, GPT-4), their unified evaluation remains inadequate, significantly hindering the field’s advancement. This problem is largely attributed to existing benchmark paradigms, which exhibit fragmented and localized characteristics. To address these limitations and offer a thorough evaluation of DSE systems, we introduce a novel benchmark named READoc, which defines DSE as a realistic task of converting unstructured PDFs into semantically rich Markdown. The READoc dataset is derived from 2,233 diverse and real-world documents from arXiv and GitHub. In addition, we develop a DSE Evaluation Suite comprising Standardization, Segmentation and Scoring modules, to conduct a unified evaluation of state-of-the-art DSE approaches. By evaluating a range of pipeline tools, expert visual models, and general VLMs, we identify the gap between current work and the unified, realistic DSE objective for the first time. We aspire that READoc will catalyze future research in DSE, fostering more comprehensive and practical solutions.

Note

Please note that we have not yet released the full dataset; for now, we are providing the complete set of PDFs here.
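
If you only need the raw PDFs, below is a minimal sketch of pulling them from the Hub with `huggingface_hub`. The `repo_id` is a placeholder, not this dataset's confirmed id; substitute the actual repository id before running.

```python
# Minimal sketch: download the READoc PDFs from the Hugging Face Hub.
# NOTE: repo_id is a placeholder; replace it with this dataset's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/READoc",     # placeholder
    repo_type="dataset",
    allow_patterns=["*.pdf"],   # fetch only the PDF files
)
print(local_dir)
```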

You can send me the Markdown files generated by your DSE systems, and I will calculate the evaluation scores for you. This evaluation workflow will be improved in the future.

You can also look at our GitHub repository, which contains a few sample documents for reference.
