Update README.md
- data was cleaned/normalized with the goal of removing "model specific APIs" like the "--ar" for Midjourney and so on
- data de-duplicated on a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)
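The cleaning step above can be sketched as a small regex pass. The flag pattern and `clean_prompt` helper below are hypothetical illustrations (the card doesn't include the actual cleaning script); they cover Midjourney-style flags such as `--ar 16:9` or `--v 5`:

```python
import re

# Hypothetical flag pattern: matches Midjourney-style parameters such as
# "--ar 16:9" or "--v 5" (an assumption, not the exact rules used here)
FLAG_RE = re.compile(r"\s*--[a-z]+(?:\s+[\w:.]+)?")

def clean_prompt(text: str) -> str:
    # drop model-specific flags, then trim stray whitespace
    return FLAG_RE.sub("", text).strip()

print(clean_prompt("a castle in the clouds --ar 16:9 --v 5"))  # → a castle in the clouds
```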

## updates

- Oct 2023: the `default` config has been updated with better deduplication: prompts were deduplicated with MinHash (_params: n-gram size 3, deduplication threshold 0.6, xxh3 hash function with 32-bit hashes, 128 permutations, batch size 10,000_), which drops 2+ million rows.
- the original version is still available under `config_name="original"`
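The MinHash parameters above (word 3-grams, 128 permutations, 32-bit hashes, threshold 0.6) can be illustrated with a pure-Python sketch. Real pipelines use a library with the xxh3 hash named in the card; seeding a stdlib hash per permutation here is a simplifying assumption:

```python
import hashlib

def ngrams(text: str, n: int = 3) -> set[str]:
    # word-level n-grams (n=3, matching the params above)
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def minhash_signature(text: str, num_perm: int = 128) -> list[int]:
    # one 32-bit min-hash per "permutation"; seeding blake2s by the
    # permutation index stands in for the xxh3 family used in practice
    grams = ngrams(text)
    return [
        min(
            int.from_bytes(
                hashlib.blake2s(f"{seed}:{g}".encode(), digest_size=4).digest(), "big"
            )
            for g in grams
        )
        for seed in range(num_perm)
    ]

def estimated_jaccard(a: list[int], b: list[int]) -> float:
    # fraction of matching signature slots estimates the Jaccard similarity
    return sum(x == y for x, y in zip(a, b)) / len(a)

sig_a = minhash_signature("a photo of a cat sitting on a red couch")
sig_b = minhash_signature("a photo of a cat sitting on a blue couch")
# pairs with an estimate >= 0.6 (the threshold above) count as near-duplicates
print(estimated_jaccard(sig_a, sig_b))
```

At threshold 0.6, prompts that differ by only a word or two would be flagged as near-duplicates and one copy dropped.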
## contents

For the `default` config:

```
DatasetDict({
    train: Dataset({
        features: ['text', 'src_dataset'],
        num_rows: 1677221
    })
    test: Dataset({
        features: ['text', 'src_dataset'],
        num_rows: 292876
    })
})
```
For the `original` config:
```
DatasetDict({
    train: Dataset({