---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: ja
    dtype: string
  - name: en
    dtype: string
  splits:
  - name: train
    num_bytes: 17758809
    num_examples: 147876
  download_size: 10012915
  dataset_size: 17758809
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- translation
language:
- ja
- en
pretty_name: tanaka-corpus
size_categories:
- 100K<n<1M
---
HF Datasets version of [Tanaka Corpus](http://www.edrdg.org/wiki/index.php/Tanaka_Corpus).
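Once published, the data can be loaded directly with the `datasets` library. A minimal sketch (the repo id `hpprc/tanaka-corpus` and the `train` split follow from the metadata and the `push_to_hub` call below):

```python
import datasets as ds

# Load the published dataset from the Hugging Face Hub.
dataset = ds.load_dataset("hpprc/tanaka-corpus", split="train")

# Each example has "id", "ja", and "en" string fields.
print(dataset[0])
```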
## Preprocessing for HF Datasets
The original data was preprocessed as follows.
```bash
wget ftp://ftp.edrdg.org/pub/Nihongo/examples.utf.gz
gunzip examples.utf.gz
```
```python
import re
from pathlib import Path

from more_itertools import chunked

import datasets as ds

data = []
with Path("examples.utf").open() as f:
    # Each sentence pair spans two lines: an "A:" line with the Japanese
    # sentence, English translation, and ID, followed by a "B:" index line,
    # which is discarded.
    for row, _ in chunked(f, 2):
        ja, en, idx = re.findall(r"A: (.*?)\t(.*?)#ID=(.*$)", row)[0]
        data.append(
            {
                "id": idx,
                "ja": ja.strip(),
                "en": en.strip(),
            }
        )

dataset = ds.Dataset.from_list(data)
dataset.push_to_hub("hpprc/tanaka-corpus")
```
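The `chunked(f, 2)` call reads the file two lines at a time because each entry in `examples.utf` consists of an `A:` line (Japanese sentence, English translation, and ID) followed by a `B:` line with word-index information, which is not needed here.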