---
license: mit
task_categories:
- text2text-generation
language:
- en
- zh
configs:
- config_name: chatgpt-2000
  default: true
  data_files:
  - split: train
    path: chatgpt-train-2000.jsonl
  - split: test
    path: chatgpt-test.jsonl
- config_name: chatgpt-8192
  data_files:
  - split: train
    path: chatgpt-train-8192.jsonl
  - split: test
    path: chatgpt-test-8192.jsonl
---
# Introduction
This repository holds the data files for translating TechLinked, a show that covers mostly technology and science news.

Raw data is in the `data/` folder. Scripts generate training data in JSONL format for OpenAI's ChatCompletion fine-tuning API.

`-2000` variants are designed for GPT-3 models with an 8192-token context length limit. `-8192` variants are designed for GPT-4o mini, which has a 128,000-token context window and 16,384 max output tokens.
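OpenAI's ChatCompletion fine-tuning format expects one JSON object per line, each with a `messages` array. A minimal sketch of one translation pair in that shape (the system prompt and example contents here are illustrative assumptions, not the repository's actual prompt):

```python
import json

# One training example in OpenAI's ChatCompletion fine-tuning format:
# a single JSON object per line, each containing a "messages" list.
record = {
    "messages": [
        {"role": "system", "content": "Translate English tech news subtitles into Chinese."},
        {"role": "user", "content": "NVIDIA announced a new GPU today."},
        {"role": "assistant", "content": "英伟达今天发布了一款新的GPU。"},
    ]
}

# ensure_ascii=False keeps the Chinese text readable in the .jsonl file.
line = json.dumps(record, ensure_ascii=False)
print(line)
```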
## How to add data to this repository
1. Install the dependency: `pip install ass`
2. Convert the ASS file into `.en.txt` and `.cn.txt` files: `python ./ass_extract.py [ASS Filename]`

   This step generates two files: `Extracted - [Filename].en.txt` and `Extracted - [Filename].cn.txt`.
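The exact logic of `ass_extract.py` is not shown here; as a minimal stdlib sketch of what extracting dialogue text from an ASS subtitle file involves (field layout per the ASS format; this does not use the `ass` package and is an illustration, not the repository's script):

```python
def extract_dialogue(ass_lines):
    """Pull the text field out of ASS Dialogue lines.

    In the ASS format, each Dialogue line has 10 comma-separated
    fields; the text is the last one, so any later commas belong
    to the subtitle text itself.
    """
    texts = []
    for line in ass_lines:
        if line.startswith("Dialogue:"):
            fields = line.split(",", 9)  # text is field 10
            if len(fields) == 10:
                texts.append(fields[9].strip())
    return texts

sample = [
    "[Events]",
    "Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text",
    "Dialogue: 0,0:00:01.00,0:00:03.00,Default,,0,0,0,,Hello, world",
]
print(extract_dialogue(sample))  # → ['Hello, world']
```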
3. Move them into the `data/` folder. You may also want to rename them, but keep the two filenames identical except for the `.en` and `.cn` parts.
4. Run the script:
   `python ./generate_chatgpt_varlen data --maxlen MAXLEN --test-ratio TEST_RATIO`
   - `data` is the data directory.
   - `MAXLEN` is recommended to be a quarter of the context window, or a little less than the maximum output tokens, whichever is smaller.
   - `TEST_RATIO` is the ratio of data reserved for testing. A decimal number; default is 0.2.

   This will generate three files:
   - `combined-{MAXLEN}.jsonl`: test + train data.
   - `chatgpt-train-{MAXLEN}.jsonl`: train data.
   - `chatgpt-test-{MAXLEN}.jsonl`: test data.
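The MAXLEN recommendation and the train/test split above can be sketched as follows. This is a hedged illustration, not the actual `generate_chatgpt_varlen` script; the model figures are the GPT-4o mini numbers quoted in the introduction, and the split logic is an assumption:

```python
import json
import random

# Rule of thumb from above: MAXLEN is a quarter of the context window,
# or a little less than the maximum output tokens, whichever is smaller.
# These figures are the GPT-4o mini numbers quoted in the introduction.
context_window = 128_000
max_output_tokens = 16_384
maxlen = min(context_window // 4, max_output_tokens)

def split_and_write(examples, test_ratio=0.2, seed=0):
    """Hold out test_ratio of the examples, then write each subset
    as JSONL (one JSON object per line). Illustrative only."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    test, train = shuffled[:n_test], shuffled[n_test:]
    for name, subset in ((f"chatgpt-train-{maxlen}.jsonl", train),
                         (f"chatgpt-test-{maxlen}.jsonl", test)):
        with open(name, "w", encoding="utf-8") as f:
            for ex in subset:
                f.write(json.dumps(ex, ensure_ascii=False) + "\n")
    return len(train), len(test)
```

With the default `test_ratio=0.2`, 10 examples would split into 8 training and 2 test records.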
The other scripts are deprecated.