
ALIGN-BENCH is a benchmark for quantitatively measuring the cross-modal alignment of vision-language models.

The code can be found at https://github.com/IIGROUP/SCL.

The core idea is to take the cross-attention maps from the last layer of the fusion encoder and compare them with annotated image regions corresponding to selected words in the caption.
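One simple way to compare an attention map with an annotated region is to measure how much of the attention mass falls inside the region. This is a minimal sketch of that idea, not the benchmark's exact scoring code; the function name and array shapes are assumptions:

```python
import numpy as np

def attention_alignment_score(attn_map, region_mask):
    """Fraction of a word's attention mass that lands inside its annotated region.

    attn_map: 2D array of non-negative cross-attention weights over image patches
              (hypothetical layout; a real model's map would be resized to the
              mask resolution first).
    region_mask: binary 2D array of the same shape, 1 inside the annotated region.
    """
    attn = attn_map / attn_map.sum()          # normalize to a distribution
    return float((attn * region_mask).sum())  # mass inside the region

# Toy example: all attention concentrated on a 2x2 patch that the mask covers.
attn = np.zeros((4, 4))
attn[0:2, 0:2] = 1.0
mask = np.zeros((4, 4))
mask[0:2, 0:2] = 1
score = attention_alignment_score(attn, mask)  # 1.0: perfectly aligned
```

A score near 1 means the word attends almost exclusively to its annotated region; a score near the region's area fraction means the attention is no better than uniform.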

ALIGN-BENCH computes global-local and local-local alignment scores from two perspectives: bounding boxes and pixel masks.

There are 1,500 images and 1,500 annotation files in the dataset zip file. Each annotation file contains a caption and the image regions (bounding box and pixel mask) of selected words.
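To make the annotation layout concrete, here is a hypothetical example of what one annotation might look like and how to iterate over it. The field names (`caption`, `regions`, `word`, `bbox`, `mask_file`) are illustrative assumptions, not the dataset's actual schema; check a real annotation file for the exact keys:

```python
# Hypothetical annotation structure: a caption plus per-word regions.
# All field names below are assumed for illustration only.
annotation = {
    "caption": "a dog runs on the grass",
    "regions": [
        {
            "word": "dog",
            "bbox": [10, 20, 60, 80],      # assumed [x1, y1, x2, y2] format
            "mask_file": "0001_dog.png",   # hypothetical pixel-mask filename
        },
    ],
}

def words_with_boxes(ann):
    """List the annotated words and their bounding boxes from one annotation."""
    return [(r["word"], r["bbox"]) for r in ann["regions"]]

pairs = words_with_boxes(annotation)  # [("dog", [10, 20, 60, 80])]
```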
