# Synthetic OCR Correction Dataset 🤗
This is a synthetic dataset generated for post-OCR correction tasks. It contains over 2,000,000 rows of French text pairs and follows the Croissant format. It is designed to train small language models for text correction.
## Description
To ensure the dataset closely resembles OCR-malformed text, we applied various transformations at random. This prevents the model from latching onto specific corruption patterns and encourages it to choose the correct word from the surrounding context. Below are some of the transformations applied:
```python
import random
import re

alterations = [
    lambda x: re.sub(r'[aeiouAEIOU]', '', x),   # Remove vowels
    lambda x: re.sub(r'\s+', ' ', x),           # Collapse multiple spaces into one
    lambda x: re.sub(r'\b\w\b', '', x),         # Remove single letters
    lambda x: re.sub(r'[^\w\s]', '', x),        # Remove punctuation
    lambda x: ''.join(random.choice((char, '')) for char in x),  # Randomly drop characters
    lambda x: ' '.join(
        ''.join(random.sample(word, len(word))) if random.random() < 0.53 else word
        for word in x.split()
    ),  # Scramble each word with 53% probability
]
```
In addition to the transformations listed above, we also modify punctuation, add words, and create repetitions. Feel free to customize these transformations as needed.
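As an illustration, these extra perturbations could be sketched as follows. This is a minimal, hypothetical sketch; the function names and probabilities are not part of the dataset's actual generation code:

```python
import random
import re

def swap_punctuation(text: str) -> str:
    # Replace each punctuation mark with a randomly chosen one (same length output)
    return re.sub(r'[.,;:!?]', lambda m: random.choice('.,;:!?'), text)

def repeat_random_word(text: str) -> str:
    # Duplicate one randomly chosen word to simulate OCR repetitions
    words = text.split()
    if not words:
        return text
    i = random.randrange(len(words))
    words.insert(i, words[i])
    return ' '.join(words)
```

Each helper stays length- or word-count-predictable, which makes it easy to verify that a perturbation was actually applied.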
## Recipe
- Each word in the text has a 53% chance of being selected for alteration.
- A random number of alterations is applied to each selected word.
- In this release, between 10% and 50% of the words in each text can be altered.
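The steps above can be sketched as a single corruption loop. This is a hedged sketch, not the actual generation script; the alteration subset, `p_word`, and `max_ops` defaults are illustrative assumptions:

```python
import random
import re

# Illustrative subset of alteration functions (assumed, not the dataset's exact list)
alterations = [
    lambda x: re.sub(r'[aeiouAEIOU]', '', x),                  # remove vowels
    lambda x: ''.join(c for c in x if random.random() < 0.8),  # drop ~20% of characters
    lambda x: ''.join(random.sample(x, len(x))),               # scramble letters
]

def corrupt(text: str, p_word: float = 0.53, max_ops: int = 2) -> str:
    """Alter each word with probability p_word, applying 1..max_ops random alterations."""
    out = []
    for word in text.split():
        if random.random() < p_word:
            for _ in range(random.randint(1, max_ops)):
                word = random.choice(alterations)(word)
        out.append(word)
    return ' '.join(out)
```

Pairing `corrupt(text)` with the original `text` yields an (input, output) training row of the same shape as this dataset.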
We welcome community feedback to improve this dataset further. Please feel free to share your thoughts or suggestions! 🤗
---
license: apache-2.0
dataset_info:
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 8975568695
    num_examples: 2101890
  download_size: 4055555318
  dataset_size: 8975568695
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text2text-generation
language:
- fr
tags:
- ocr
- postocr
size_categories:
- 1M<n<10M
---