---
dataset_info:
  - config_name: Autonomous Driving
    features:
      - name: domain
        dtype: string
      - name: image
        dtype: image
      - name: question
        dtype: string
      - name: actions
        sequence: string
      - name: answer_index
        dtype: int64
      - name: reason
        dtype: string
      - name: key_concept
        sequence: string
      - name: question_prompt
        dtype: string
      - name: answer_with_reason
        dtype: string
      - name: full_meta_data_json
        dtype: string
    splits:
      - name: test_open
        num_bytes: 134659773
        num_examples: 100
      - name: test_closed
        num_bytes: 67549223
        num_examples: 150
    download_size: 270416985
    dataset_size: 202208996
  - config_name: Domestic Robot
    features:
      - name: domain
        dtype: string
      - name: image
        dtype: image
      - name: question
        dtype: string
      - name: actions
        sequence: string
      - name: answer_index
        dtype: int64
      - name: reason
        dtype: string
      - name: key_concept
        sequence: string
      - name: question_prompt
        dtype: string
      - name: answer_with_reason
        dtype: string
      - name: full_meta_data_json
        dtype: string
    splits:
      - name: test_open
        num_bytes: 91702060
        num_examples: 100
      - name: test_closed
        num_bytes: 177827577
        num_examples: 200
    download_size: 105390299
    dataset_size: 269529637
  - config_name: Open-World Game
    features:
      - name: domain
        dtype: string
      - name: image
        dtype: image
      - name: question
        dtype: string
      - name: actions
        sequence: string
      - name: answer_index
        dtype: int64
      - name: reason
        dtype: string
      - name: key_concept
        sequence: string
      - name: question_prompt
        dtype: string
      - name: answer_with_reason
        dtype: string
      - name: full_meta_data_json
        dtype: string
    splits:
      - name: test_open
        num_bytes: 16139511
        num_examples: 117
      - name: test_closed
        num_bytes: 19069366
        num_examples: 141
    download_size: 34988721
    dataset_size: 35208877
configs:
  - config_name: Autonomous Driving
    data_files:
      - split: test_open
        path: Autonomous Driving/test_open-*
      - split: test_closed
        path: Autonomous Driving/test_closed-*
  - config_name: Domestic Robot
    data_files:
      - split: test_open
        path: Domestic Robot/test_open-*
      - split: test_closed
        path: Domestic Robot/test_closed-*
  - config_name: Open-World Game
    data_files:
      - split: test_open
        path: Open-World Game/test_open-*
      - split: test_closed
        path: Open-World Game/test_closed-*
license: apache-2.0
task_categories:
  - multiple-choice
  - visual-question-answering
language:
  - en
pretty_name: PCA-Bench
---

# PCA-Bench


PCA-Bench is a benchmark for evaluating and locating errors in multimodal LLMs on embodied decision-making tasks, with a focus on three stages: perception, cognition, and action.
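
Each instance pairs an image with a question and a list of candidate actions, following the schema in the metadata above. A minimal sketch of loading and inspecting one example:

```python
# Peek at one example; field names come from the dataset schema above.
from datasets import load_dataset

dataset = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")
example = dataset["test_open"][0]

print(example["question"])      # the decision-making question
print(example["actions"])       # candidate actions (multiple choice)
print(example["answer_index"])  # index of the correct action
print(example["key_concept"])   # key concepts the model should perceive
```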

## Release

- [2024.02.15] PCA-Bench-V1 is released. The open- and closed-track data are available on Hugging Face, and an online leaderboard now accepts user submissions.
- [2023.12.15] PCA-EVAL is accepted to the Foundation Models for Decision Making Workshop @ NeurIPS 2023. The PCA-Evaluation tool is released on GitHub.

## Leaderboard

Leaderboard with Full Metrics

## Submit Results

📢 For closed-track evaluation and PCA-Evaluation, please follow this file to organize your model output. Submit six JSON files (one per domain and track), along with your model name and organization, to us via email. Make sure you use the dataset's provided prompt as the default input for a fair comparison.

We will send the PCA-Eval results of your model to you and update the leaderboard.

We provide sample code for producing the six JSON files; you only need to plug in your model's inference call:

```python
# Sample code for PCA-Eval
from datasets import load_dataset
from tqdm import tqdm
import json
import os

def YOUR_INFERENCE_CODE(prompt, image):
    """Simple single-round multimodal conversation call."""
    response = YOUR_MODEL.inference(prompt, image)
    return response

output_path = "./Results-DIR-PATH/"
os.makedirs(output_path, exist_ok=True)

dataset_ad = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")
dataset_dr = load_dataset("PCA-Bench/PCA-Bench-V1", "Domestic Robot")
dataset_og = load_dataset("PCA-Bench/PCA-Bench-V1", "Open-World Game")

test_dataset_dict = {
    "Autonomous-Driving": dataset_ad,
    "Domestic-Robot": dataset_dr,
    "Open-World-Game": dataset_og,
}
test_split = ["test_closed", "test_open"]
test_domain = list(test_dataset_dict.keys())

for domain in test_domain:
    for split in test_split:
        print("testing on %s:%s" % (domain, split))

        prediction_results = []
        output_filename = os.path.join(output_path, "%s-%s.json" % (domain, split))
        prompts = test_dataset_dict[domain][split]["question_prompt"]
        images = test_dataset_dict[domain][split]["image"]

        for prompt_id in tqdm(range(len(prompts))):
            user_inputs = prompts[prompt_id]  # do not change the prompts for fair comparison
            index = prompt_id
            image = images[prompt_id]

            outputs = YOUR_INFERENCE_CODE(user_inputs, image)

            prediction_results.append({
                "prompt": user_inputs,
                "model_output": outputs,
                "index": index,
            })

        # one JSON file per (domain, split) pair -> six files in total
        with open(output_filename, "w") as f:
            json.dump(prediction_results, f, indent=4)

# submit the 6 json files in the output_path to our email
```

You can also compute the multiple-choice accuracy locally as a comparison metric in your own experiments. Note, however, that the online leaderboard ranks models only by the average action score and the Genuine PCA score.
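
For example, here is a minimal local scoring sketch for the open track. The letter-extraction heuristic (assuming the prompt labels actions A, B, C, ...) and the result file path are illustrative assumptions; the official ranking uses PCA-Eval, not this heuristic:

```python
# Minimal local multiple-choice accuracy check (a sketch, not official scoring).
# Assumption: the model's output contains the chosen option letter (A, B, C, ...).
import json
import re
import string

from datasets import load_dataset

def extract_choice(model_output, num_actions):
    """Heuristically pull the first valid option letter out of the model output."""
    letters = string.ascii_uppercase[:num_actions]
    match = re.search(r"\b([%s])\b" % letters, model_output)
    return letters.index(match.group(1)) if match else -1

split = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")["test_open"]

# Path follows the naming scheme of the sample code above.
with open("./Results-DIR-PATH/Autonomous-Driving-test_open.json") as f:
    predictions = json.load(f)

correct = sum(
    extract_choice(pred["model_output"], len(split[pred["index"]]["actions"]))
    == split[pred["index"]]["answer_index"]
    for pred in predictions
)
print("multiple-choice accuracy: %.3f" % (correct / len(predictions)))
```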

For more information, refer to the official GitHub repo.