e9t committed
Commit abb829c
1 Parent(s): 5b55dd9

Update README.md

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -183,15 +183,18 @@ $ git clone https://huggingface.co/datasets/upstage/dp-bench.git
 $ cd dp-bench
 $ pip install -r requirements.txt
 ```
-The repository includes necessary scripts for performing inference and evaluation of document parsers.
+The repository includes necessary scripts for inference and evaluation, as described in the following sections.
 
-### Dataset
+### Inference
+TODO
+
+### Evaluation
 The benchmark dataset can be found in the `dataset` folder.
 It contains a wide range of document layouts, from text-heavy pages to complex tables, enabling a thorough evaluation of the parser’s performance.
 The dataset comes with annotations for layout elements such as paragraphs, headings, and tables.
 
 
-### Element detection and serialization evaluation
+#### Element detection and serialization evaluation
 This evaluation will compute the NID metric to assess how accurately the text in the document is recognized considering the structure and order of the document layout.
 To evaluate the document layout results, run the following command:
 
@@ -203,7 +206,7 @@ $ python evaluate.py \
 ```
 
 
-### Table structure recognition evaluation
+#### Table structure recognition evaluation
 This will compute TEDS-S (structural accuracy) and TEDS (structural and textual accuracy).
 To evaluate table recognition performance, use the following command: