---
pretty_name: Scientific Figures, Captions and Context
task_categories:
- visual-question-answering
- document-question-answering
language:
- en
size_categories:
- 100K<n<1M
---

# Scientific Figures, Captions and Context
Example figure captions from the dataset:

- "Figure 5: Comparisons between our multifidelity learning paradigm and single low-fidelity (all GPT-3.5) annotation on four domain-specific tasks given the same total 1000 annotation budget. Note that the samples for all GPT-3.5 are drawn based on the uncertainty score."
- "Figure 3: Problem representation visualization by t-SNE. Our model with A&D improves the problem representation learning, which groups analogical problems close and separates non-analogical problems."
### Usage

The `merged.json` file maps each figure's filename, as stored in the repository, to its caption, label, and context. To use the dataset, extract the archive parts located under `dataset/figures/` and keep the raw images in the same directory so that they match the `image_filename` fields. The images are named in the format

```
<paper_id>-<figure_name>
```

where `paper_id` is the identifier assigned by arXiv and `figure_name` is the name of the figure as given in the raw source of each paper.
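The snippet below is a minimal sketch (not part of the dataset tooling) of one way to pair the extracted images with their metadata. It assumes `merged.json` sits next to the extracted `dataset/figures/` directory and that each paper's entry is a list of figure objects, as illustrated in the Data Instances section; adjust the paths and the iteration if your layout differs.

```python
import json
from pathlib import Path

# Assumed locations: merged.json in the repository root, with the images
# already extracted so that the image_filename fields resolve relative to it.
with open("merged.json", "r", encoding="utf-8") as f:
    papers = json.load(f)

# merged.json is assumed to map each paper to a list of figure records;
# fall back to treating the file as a plain list if the layout differs.
figure_lists = papers.values() if isinstance(papers, dict) else papers

missing = 0
for figures in figure_lists:
    for fig in figures:
        image_path = Path(fig["image_filename"])  # e.g. dataset/figures/<paper_id>-<figure_name>
        if not image_path.exists():
            missing += 1
            continue
        caption = fig["caption"]          # the figure's caption text
        context = fig.get("context", [])  # surrounding text snippets referencing the figure

print(f"Figures whose image file was not found: {missing}")
```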
# Contributors

Yousef Gomaa (@yousefg-codes) and Mohamed Awadalla (@mawadalla)

## Dataset Description

- **Paper:** coming soon

### Dataset Summary

This dataset includes ~690,000 figures from ~150,000 scientific papers sourced from arXiv. Each object in the JSON file is a single research paper with a list of figures, each with its caption and surrounding context.

| Category | Count  |
|:---------|-------:|
| Figure   | 690883 |
| Paper    | 152504 |

### Data Instances

An example of an object in the `merged.json` file (the `<paper_id>` key and the values shown are illustrative; `image_filename` may also end in `.eps`, `.pdf`, or another format):

```json
{
  "<paper_id>": [
    {
      "image_filename": "dataset/figures/example.png",
      "label": "fig_example",
      "caption": "an example caption for this figure",
      "context": [
        "example context where this figure was referenced",
        "up to 600 characters"
      ]
    },
    ...
  ]
}
```

## Dataset Creation

We used arXiv's bulk access to collect the source papers.

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Citation Information

coming soon