
Registration

Please register for the MMFM Challenge via MMFM Challenge CMT and select the challenge track. For the full submission instructions, refer to MMFM Submission.

Submission

Here are the instructions for submitting results to the online evaluation and leaderboard.

  • Please log in to the Hugging Face competition space before submitting your generated answers.

Phase 1

  • For this phase, train and test sets from 10 open-source datasets (DocVQA, InfographicsVQA, Websrc, WTQ, IconQA Fill in the blank, FunSD, IconQA Choose text, WildReceipt, TextbookQA and TabFact) are provided.
  • There are 2000 questions in the test set (200 questions per dataset).

Phase 2

  • An unseen test set consisting of 3 private datasets (mydoc, mychart, myinfographic) is released. We encourage participants to read our GitHub Readme.
  • There are 1028 questions in the test set (400 questions for mydoc, 200 questions for mychart and 428 questions for myinfographic).

Submission

  • Submission procedure
    • First log in to the Hugging Face space by clicking the blue button "Login with Hugging Face".
    • Click the "New Submission" button and upload the submission file. The submission file is a .csv file in the following structure (the headings id,pred,split cannot be changed). The submission file should contain predictions for all the 3028 questions from the 13 datasets.
      A dummy submission file can be downloaded from here and used as a template where you find the sample ids for the 3028 questions. Submission with missing samples will lead to an error. We recommend using the dummy submission file as a template to check whether all sample ids are included.
    • Then click on the "Submit" button to submit the file. "Success! You have xx submissions remaining today." indicates that the submission is successful.
    • Click on "Cancel" to go back to the submission page and click on the "Logs" button (at the top panel) to check the evaluation status.
    • The evaluation process on the 3028 samples takes around 10 minutes.
  • Solutions to potential errors
    • "Invalid Token" error: log in to the Hugging Face space by clicking the blue button "Login with Hugging Face" before submitting.
    • "404" error: a potential solution is to switch to the Chrome browser.
    • "Error: missing sample ids in the submission file: " in the log: check the submission file and make sure all the sample ids are included (a small check for this is sketched after the CSV example below).
id,pred,split
docvqa_docvqa_0_4,dummy,public
...
infographicvqa_infographicvqa_1_0_2,dummy,public
...
websrc_websrc_1_0_213,dummy,public
...
wtq_wtq_1_1_0,dummy,public
...
iconqa_fill_in_blank_iconqa_1_13,dummy,public
...
funsd_funsd_1_0_27_26_2,dummy,public
...
iconqa_choose_txt_iconqa_1_4,dummy,public
...
wildreceipt_wildreceipt_1_3_73_73_7,dummy,public
...
textbookqa_textbookqa_1_10_2_3_5,dummy,public
...
tabfact_tabfact_1_0_9,dummy,public
...
mydoc_5227,dummy,public
...
mychart_396_vbar,dummy,public
...
myinfographic_5,dummy,public
...

where the 3 columns are the sample id, the prediction, and the split (set to public by default).
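
For reference, here is a minimal sketch (not an official challenge script) that fills the dummy template with your own predictions and checks that no sample id is missing. The file names dummy_submission.csv and my_submission.csv and the predictions dictionary are placeholders.

```python
# Minimal sketch, not an official challenge script.
# Assumptions: "dummy_submission.csv" is the downloaded dummy template and
# `predictions` maps sample id -> your model's answer (both are placeholders).
import csv

predictions = {"docvqa_docvqa_0_4": "some answer"}  # fill with your model outputs

with open("dummy_submission.csv", newline="") as f:
    template_rows = list(csv.DictReader(f))  # columns: id, pred, split

# Warn about sample ids that still have no prediction (they would otherwise
# be written out with a dummy answer below).
missing = [r["id"] for r in template_rows if r["id"] not in predictions]
if missing:
    print(f"{len(missing)} sample ids have no prediction yet, e.g. {missing[:3]}")

with open("my_submission.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "pred", "split"])
    writer.writeheader()
    for row in template_rows:
        writer.writerow({
            "id": row["id"],
            "pred": predictions.get(row["id"], "dummy"),
            "split": "public",  # default split value
        })
```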
The sample id formats for the 10 phase-1 datasets are:

DocVQA: docvqa_docvqa_x_x  
InfographicsVQA: infographicvqa_infographicvqa_x_x_x  
WebSrc: websrc_websrc_x_x_x  
WTQ: wtq_wtq_x_x_x  
IconQA Fill in the blank: iconqa_fill_in_blank_iconqa_x_x  
FunSD: funsd_funsd_x_x_x_x_x  
IconQA Choose text: iconqa_choose_txt_iconqa_x_x  
WildReceipt: wildreceipt_wildreceipt_x_x_x_x_x  
TextbookQA: textbookqa_textbookqa_x_x_x_x_x  
TabFact: tabfact_tabfact_x_x_x

The sample id consists of 3 components: the dataset name, the sub-dataset name and the original sample id.
The sub-dataset name is the same as the dataset name, except for iconqa_fill_in_blank_iconqa and iconqa_choose_txt_iconqa.
Here the trailing x_x (or x_x_x, etc.) is the original sample id given in the processed_data/dataset_name/converted_output_test.json file.
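
For illustration only, the sketch below assembles phase-1 sample ids from the processed data. It assumes converted_output_test.json is a JSON list whose entries carry the original sample id in an "id" field; the actual field name and layout may differ, so adapt it to your local copy.

```python
# Illustrative sketch only; the JSON layout below is an assumption.
import json

dataset = "tabfact"      # folder name under processed_data/ (example)
sub_dataset = "tabfact"  # same as the dataset name except for the two IconQA variants

with open(f"processed_data/{dataset}/converted_output_test.json") as f:
    entries = json.load(f)  # assumed: a list of dicts with an "id" field

# Submission id = dataset name + sub-dataset name + original sample id
sample_ids = [f"{dataset}_{sub_dataset}_{entry['id']}" for entry in entries]
print(sample_ids[:3])  # e.g. ['tabfact_tabfact_1_0_9', ...]
```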

The sample id formats for the 3 phase-2 datasets are, for example:

mydoc: mydoc_5227
mychart: mychart_396_vbar
myinfographic: myinfographic_5

Here the sample ids are the same as the ones given in the annotation files (annot_wo_answer.json) downloaded from the Google Drive link in the GitHub Readme.
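
Likewise, a rough sketch for collecting the phase-2 ids directly from an annotation file, assuming annot_wo_answer.json is a JSON list of entries with an "id" field (the path and field name are placeholders; adjust them to the downloaded files):

```python
# Rough sketch; the path and JSON layout are assumptions.
import json

with open("mydoc/annot_wo_answer.json") as f:
    annotations = json.load(f)  # assumed: a list of dicts with an "id" field

phase2_ids = [entry["id"] for entry in annotations]
print(len(phase2_ids), phase2_ids[:3])  # e.g. 400 ['mydoc_5227', ...]
```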