---
dataset_info:
  features:
  - name: image_url
    dtype: string
  - name: image_id
    dtype: string
  - name: language
    sequence: string
  - name: caption_reference_description
    sequence: string
  - name: caption_alt_text_description
    sequence: string
  - name: caption_attribution_description
    sequence: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 180043531167.75
    num_examples: 11019202
  download_size: 174258428914
  dataset_size: 180043531167.75
license: cc-by-sa-4.0
size_categories:
- 100M<n<1B
---
# Dataset Card for "AToMiC-All-Images_wi-pixels"
## Dataset Description
- **Homepage:** [AToMiC homepage](https://trec-atomic.github.io/)
- **Source:** [WIT](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
### Languages
The dataset covers images drawn from Wikipedia pages in 108 languages.
### Data Instances
Each instance consists of an image URL, an image ID, the image itself (stored as bytes), and its associated captions.
### Intended Usage
1. Image collection for text-to-image retrieval
2. Image-caption retrieval, generation, and translation
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
TBA
### Acknowledgement
Thanks to:
- [img2dataset](https://github.com/rom1504/img2dataset)
- [Datasets](https://github.com/huggingface/datasets)