---
license: cc0-1.0
size_categories:
- 1M<n<10M
source_datasets:
- gbenson/webui-dom-snapshots
task_categories:
- text-classification
pretty_name: WebUI tokens (unlabelled)
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 71634460
    num_examples: 1616815
  download_size: 69704391
  dataset_size: 71634460
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for WebUI tokens (unlabelled)
Every token over 5 characters long from [gbenson/webui-dom-snapshots](https://huggingface.co/datasets/gbenson/webui-dom-snapshots).
- **Curated by:** Gary Benson
- **License:** CC0 1.0 Universal
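
For orientation, here is a rough sketch of the filter the description implies. It is not the actual extraction code: the whitespace tokenization rule, the streaming flag, and the source dataset's `text` column are all assumptions.

```python
from datasets import load_dataset

# Stream the source snapshots rather than downloading them all at once.
# NOTE: the "text" column name is an assumption about the source schema.
source = load_dataset("gbenson/webui-dom-snapshots", split="train", streaming=True)

def tokens_over_5_chars(example):
    # Assumed rule: split on whitespace, keep tokens longer than 5 characters.
    return [t for t in example["text"].split() if len(t) > 5]

first = next(iter(source))
print(tokens_over_5_chars(first)[:10])
```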
## Uses
I'm using it to develop a DOM-aware tokenizer for HTML.
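
The tokens can be loaded with the `datasets` library. A minimal example, assuming the repository id `gbenson/webui-tokens-unlabelled`:

```python
from datasets import load_dataset

# Repository id assumed from this card's pretty name.
ds = load_dataset("gbenson/webui-tokens-unlabelled", split="train")

# Each example is a single token string under the "text" feature.
print(ds[0]["text"])
```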
## Bias, Risks, and Limitations
- 87% of the source dataset was English-language websites, with no other language exceeding 2% of the total
- Non-ASCII tokens have been coerced to ASCII using [Unidecode](https://pypi.org/project/Unidecode/) where the result appears visually similar (see the sketch below)
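
A minimal sketch of what such a coercion might look like. The published criterion is only "appears visually similar"; the check used here (the Unidecode output matches the token with combining marks stripped) is an assumed stand-in, not the author's actual test.

```python
import unicodedata
from unidecode import unidecode

def coerce_token(token: str) -> str:
    """Map a token to ASCII only when the mapping looks cosmetic.

    ASSUMPTION: "visually similar" is approximated by comparing the
    Unidecode output against the token with combining marks removed.
    """
    ascii_form = unidecode(token)
    stripped = "".join(
        c for c in unicodedata.normalize("NFKD", token)
        if not unicodedata.combining(c)
    )
    # e.g. "café" -> "cafe" passes; "日本語" -> "Ri Ben Yu" does not,
    # so the original token is kept unchanged.
    return ascii_form if ascii_form == stripped else token
```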