---
pretty_name: Scientific Figures, Captions and Context
task_categories:
- visual-question-answering
- document-question-answering
language:
- en
size_categories:
- 100K<n<1M
configs:
- config_name: Data
data_files: merged.json
---
# Dataset Card for Scientific Figures, Captions, and Context
A novel vision-language dataset of scientific figures taken directly from research papers.
We scraped approximately 150k papers, yielding about 690k figures in total. We extracted each figure's caption and label from the paper source. In addition, we searched each paper for references to each figure and included the surrounding text as 'context' for that figure.
All figures were taken from arXiv research papers.
<figure>
<img width="500" src="example1.png">
<figcaption>Figure 5: Comparisons between our multifidelity learning paradigm and single low-fidelity (all GPT-3.5) annotation on four domain-specific tasks given the same total 1000 annotation budget. Note that the samples for all GPT-3.5 are drawn based on the uncertainty score.</figcaption>
</figure>
<figure>
<img width="500" src="example2.png">
<figcaption>Figure 3: Problem representation visualization by T-SNE. Our model with A&D improves the problem representation learning, which groups analogical problems close and separates non-analogical problems.</figcaption>
</figure>
### Usage
The `merged.json` file is a mapping between the figure's filename as stored in the repository and its caption, label, and context.
To use the dataset, extract the archives located under `dataset/figures/` and keep the raw images in that directory so their paths match the `image_filename` fields.
The images are named in the format `<paper id>-<figure name>`, where the paper id is the identifier assigned by arXiv and the figure name is the figure's name in the raw source of each paper.
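As a minimal sketch of the loading step described above, the helper below parses `merged.json` and yields one record per figure. It assumes the structure described in this card: each top-level key is a paper id mapped to a list of figure records with `image_filename`, `label`, `caption`, and `context` fields (the exact key layout of `merged.json` should be confirmed against the file itself).

```python
import json

def load_figures(merged_path="merged.json"):
    """Yield (paper_id, figure_record) pairs from merged.json.

    Hypothetical helper: assumes each top-level entry maps a paper id to a
    list of figure records as described in this card.
    """
    with open(merged_path, "r", encoding="utf-8") as f:
        data = json.load(f)
    for paper_id, figures in data.items():
        for fig in figures:
            yield paper_id, fig

if __name__ == "__main__":
    for paper_id, fig in load_figures():
        # fig["image_filename"] points under dataset/figures/, so the images
        # must be extracted there for the path to resolve.
        print(paper_id, fig["label"], fig["caption"][:80])
```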
# Contributors
Yousef Gomaa (@yousefg-codes) and Mohamed Awadalla (@mawadalla)
## Dataset Description
- **Paper:** coming soon
### Dataset Summary
This dataset includes ~690,000 figures from ~150,000 scientific papers sourced from arXiv. Each object in the JSON file is a single research paper with a list of figures, each with its caption and surrounding context.
| Category | Count |
|:-----------|--------:|
| Figure | 690883 |
| Paper | 152504 |
### Data Instances
An example of an object in the `merged.json` file, truncated to a single figure entry (image files may also be `.eps`, `.pdf`, or another format):
```json
{
  "<paper id>": [
    {
      "image_filename": "dataset/figures/example.png",
      "label": "fig_example",
      "caption": "an example caption for this figure",
      "context": ["example context where this figure was referenced", "up to 600 characters"]
    }
  ]
}
```
## Dataset Creation
We utilized the bulk access of arXiv's papers.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Citation Information
coming soon