shinseung428 committed
Commit 2bacaa3
1 Parent(s): ab4b07b

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -179,7 +179,7 @@ The dataset’s structure supports flexible representation of layout classes and
 
 ### Setup
 
-Before setting up the environment, make sure to [install Git LFS](https://git-lfs.com/), which is required for handling large files.
+Before setting up the environment, **make sure to [install Git LFS](https://git-lfs.com/)**, which is required for handling large files.
 Once installed, you can clone the repository and install the necessary dependencies by running the following commands:
 
 ```
@@ -198,6 +198,9 @@ The benchmark dataset can be found in the `dataset` folder.
 It contains a wide range of document layouts, from text-heavy pages to complex tables, enabling a thorough evaluation of the parser’s performance.
 The dataset comes with annotations for layout elements such as paragraphs, headings, and tables.
 
+The following options are required for evaluation:
+- **`--ref_path`**: Specifies the path to the reference JSON file, predefined as `data/reference.json` for evaluation purposes.
+- **`--pred_path`**: Indicates the path to the predicted JSON file. You can either use a sample result located in the `dataset/sample_results` folder, or generate your own by using the inference script provided in the `scripts` folder.
 
 #### Element detection and serialization evaluation
 This evaluation will compute the NID metric to assess how accurately the text in the document is recognized considering the structure and order of the document layout.
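For reference, a minimal invocation sketch follows, showing how the two options added above might be passed on the command line. The `evaluate.py` entry point and the prediction file name are assumptions made for illustration; only the `--ref_path` / `--pred_path` flags, the `data/reference.json` default, and the `dataset/sample_results` folder come from the README text.

```bash
# Minimal sketch, not the repository's exact command:
# - evaluate.py is a hypothetical entry-point name (the snippet above does not name the evaluation script)
# - the prediction file name under dataset/sample_results is illustrative only
python evaluate.py \
  --ref_path data/reference.json \
  --pred_path dataset/sample_results/sample_result.json
```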