shinseung428 committed
Commit 4570b32
1 Parent(s): 409cd01
Update README.md

README.md CHANGED
@@ -33,7 +33,7 @@ $$
 NID = 1 - \frac{\text{distance}}{\text{len(reference)} + \text{len(prediction)}}
 $$
 
-The distance measures the similarity between the reference and predicted text, with values ranging from 0 to 1, where 0 represents perfect alignment and 1 denotes complete dissimilarity.
+The normalized distance in the equation measures the similarity between the reference and predicted text, with values ranging from 0 to 1, where 0 represents perfect alignment and 1 denotes complete dissimilarity.
 Here, the predicted text is compared against the reference text to determine how many character-level insertions and deletions are needed to match it.
 A higher NID score reflects better performance in both recognizing and ordering the text within the document's detected layout regions.
 
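For reference, the NID computation described in this hunk can be sketched in a few lines of Python. The helper below counts character-level insertions and deletions only, as the text above describes, and is an illustrative approximation rather than the benchmark's official scoring code; all function and variable names are made up for the example.

```python
# Illustrative NID sketch (not the benchmark's implementation).
# Distance counts character insertions and deletions only, so it is
# bounded by len(reference) + len(prediction).

def insert_delete_distance(reference: str, prediction: str) -> int:
    """Minimum number of character insertions/deletions turning prediction into reference."""
    m, n = len(reference), len(prediction)
    prev = list(range(n + 1))  # distances between reference[:0] and prediction[:j]
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            if reference[i - 1] == prediction[j - 1]:
                curr[j] = prev[j - 1]              # characters match, no edit
            else:
                curr[j] = 1 + min(prev[j],         # delete a reference character
                                  curr[j - 1])     # insert a prediction character
        prev = curr
    return prev[n]

def nid(reference: str, prediction: str) -> float:
    denom = len(reference) + len(prediction)
    if denom == 0:
        return 1.0  # two empty strings: treat as a perfect match
    return 1.0 - insert_delete_distance(reference, prediction) / denom

print(nid("document parsing", "document parsing"))  # 1.0
print(nid("document parsing", "documnt parsing"))   # close to 1.0
print(nid("abc", "xyz"))                             # 0.0, nothing in common
```

Because the insert/delete distance can never exceed the combined length of the two strings, the resulting score always falls between 0 and 1.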
@@ -53,12 +53,13 @@ $$
 
 The equation evaluates the similarity between two tables by modeling them as tree structures \\(T_a\\) and \\(T_b\\).
 This metric evaluates how accurately the table structure is predicted, including the content of each cell.
-A higher TEDS score indicates better overall performance in capturing both the table
+A higher TEDS score indicates better overall performance in capturing both the table structure and the content of each cell.
 
 **TEDS-S (Tree Edit Distance-based Similarity-Struct).**
 TEDS-S stands for Tree Edit Distance-based Similarity-Struct, measuring the structural similarity between the predicted and reference tables.
-While the metric formulation is identical to TEDS, it uses modified tree representations, denoted as \\(T_a'\\) and \\(T_b'\\), where the nodes correspond solely to the table structure, omitting
-This allows TEDS-S to concentrate on assessing the structural similarity of the tables, such as row and column alignment, without being influenced by the
+While the metric formulation is identical to TEDS, it uses modified tree representations, denoted as \\(T_a'\\) and \\(T_b'\\), where the nodes correspond solely to the table structure, omitting the content of each cell.
+This allows TEDS-S to concentrate on assessing the structural similarity of the tables, such as row and column alignment, without being influenced by the contents within the cells.
+
 ## Benchmark dataset
 
 ### Document sources
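The TEDS and TEDS-S lines above can likewise be illustrated with a small sketch. It uses the third-party zss package for Zhang-Shasha tree edit distance and the commonly used 1 - TED / max(|T_a|, |T_b|) normalization; the toy HTML-like trees and every name here are assumptions for illustration, not this repository's implementation.

```python
# Rough TEDS-style sketch using the zss library (pip install zss).
# The tree construction and normalization are simplified assumptions.
from zss import Node, simple_distance

def count_nodes(node: Node) -> int:
    return 1 + sum(count_nodes(child) for child in Node.get_children(node))

def teds(tree_a: Node, tree_b: Node) -> float:
    """1 - TED / max(|T_a|, |T_b|): the commonly used TEDS normalization."""
    distance = simple_distance(tree_a, tree_b)
    return 1.0 - distance / max(count_nodes(tree_a), count_nodes(tree_b))

# Tiny 1x2 table: <table><tr><td>A</td><td>B</td></tr></table>
reference = Node("table").addkid(Node("tr").addkid(Node("td:A")).addkid(Node("td:B")))
# The prediction gets the structure right but one cell's content wrong.
prediction = Node("table").addkid(Node("tr").addkid(Node("td:A")).addkid(Node("td:X")))
print(teds(reference, prediction))  # 0.75: one relabel out of four nodes

# TEDS-S variant: drop cell contents and keep only structural node labels.
reference_s = Node("table").addkid(Node("tr").addkid(Node("td")).addkid(Node("td")))
prediction_s = Node("table").addkid(Node("tr").addkid(Node("td")).addkid(Node("td")))
print(teds(reference_s, prediction_s))  # 1.0: identical structure
```

The two final calls mirror the TEDS-S idea in the hunk: once the cell contents are stripped from the node labels, only row and column organization affects the score.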
@@ -80,7 +81,9 @@ Together, these sources provide a broad and specialized range of information.
 
 While works like [ReadingBank](https://github.com/doc-analysis/ReadingBank) often focus solely on text conversion in document parsing, we have taken a more detailed approach by dividing the document into specific elements, with a particular emphasis on table performance.
 
-This benchmark dataset was created by extracting pages with various layout elements from multiple types of documents.
+This benchmark dataset was created by extracting pages with various layout elements from multiple types of documents.
+The layout elements consist of 12 element types: **Table, Paragraph, Figure, Chart, Header, Footer, Caption, Equation, Heading1, List, Index, Footnote**.
+This diverse set of layout elements ensures that our evaluation covers a wide range of document structures and complexities, providing a comprehensive assessment of document parsing capabilities.
 
 Note that only Heading1 is included among various heading sizes because it represents the main structural divisions in most documents, serving as the primary section title.
 This high-level segmentation is sufficient for assessing the core structure without adding unnecessary complexity.
@@ -110,7 +113,8 @@ Detailed heading levels like Heading2 and Heading3 are omitted to keep the evaluation
 The dataset is in JSON format, representing elements extracted from a PDF file, with each element defined by its position, layout class, and content.
 The **category** field represents various layout classes, including but not limited to text regions, headings, footers, captions, tables, and more.
 The **content** field has three options: the **text** field contains text-based content, **html** represents layout regions where equations are in LaTeX and tables in HTML, and **markdown** distinguishes between regions like Heading1 and other text-based regions such as paragraphs, captions, and footers.
-Each element includes coordinates (x, y), a unique ID, and the page number it appears on.
+Each element includes coordinates (x, y), a unique ID, and the page number it appears on.
+The dataset’s structure supports flexible representation of layout classes and content formats for document parsing.
 
 ```
 {