Update README.md

README.md CHANGED

@@ -25,7 +25,7 @@ configs:
 ---
 # Dataset Card for WebUI tokens (unlabelled)
 
-Every token over
+Every token over 5 characters long from [`gbenson/webui-dom-snapshots`](https://huggingface.co/datasets/gbenson/webui-dom-snapshots).
 
 - **Curated by:** [Gary Benson](https://gbenson.net/)
 - **License:** [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)
@@ -37,4 +37,4 @@ I'm using it to develop a [DOM-aware tokenizer](https://github.com/gbenson/dom-t
 ## Bias, Risks, and Limitations
 
 - 87% of the source dataset was English language websites, with no other language exceeding 2% of the total
-
+- Non-ASCII tokens have been coerced to ASCII using [Unidecode](https://pypi.org/project/Unidecode/) where the result appears visually similar
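The Unidecode coercion the new bullet describes could be sketched roughly as follows. This is a hypothetical approximation, not the dataset's actual pipeline: it uses only the standard library's `unicodedata` (the card names the Unidecode package), and "visually similar" is approximated as a same-length ASCII result after stripping combining marks.

```python
import unicodedata

def coerce_to_ascii(token: str) -> str:
    """Return an ASCII version of token when it looks visually similar,
    otherwise return the token unchanged (hypothetical approximation)."""
    # NFKD splits accented characters into base letter + combining mark
    decomposed = unicodedata.normalize("NFKD", token)
    # Drop the combining marks, keeping only the base characters
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Accept the coercion only if it is pure ASCII and didn't change length
    if stripped.isascii() and len(stripped) == len(token):
        return stripped
    return token

print(coerce_to_ascii("café"))    # → cafe
print(coerce_to_ascii("日本語"))  # no similar ASCII form, left unchanged
```

A length check like this keeps accented Latin tokens while leaving non-Latin scripts untouched, which matches the card's "where the result appears visually similar" caveat; Unidecode itself applies broader transliteration tables.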