Datasets:
Update data
Browse files
- README.md +12 -14
- chatgpt-test-2000.jsonl +3 -0
- chatgpt-test-8192.jsonl +3 -0
- chatgpt-train-2000.jsonl +3 -0
- chatgpt-train-8192.jsonl +3 -0
- combined-2000.jsonl +3 -0
- combined-8192.jsonl +3 -0
- data/2023-10-18 China lost its GPU privileges [Wz9JoTTRa4k].cn.txt +9 -10
- data/2023-10-18 China lost its GPU privileges [Wz9JoTTRa4k].en.txt +2 -2
- data/2023-10-25 Windows' Apple Silicon Moment is Coming [xFvGERpRUpM].cn.txt +2 -2
- data/2023-10-28 Google Pays Apple EVERY YEAR [qvZAYJLmzuk].cn.txt +1 -1
- generate_chatgpt_varlen.py +214 -0
README.md
CHANGED
@@ -5,26 +5,24 @@ task_categories:
 language:
 - en
 - zh
-size_categories:
-- n<1K
 configs:
-- config_name:
+- config_name: chatgpt-2000
   default: true
   data_files:
   - split: train
-    path: "train.
+    path: "chatgpt-train-2000.jsonl"
-  - split: test
-    path: "test.json"
-- config_name: chatgpt
-  data_files:
-  - split: train
-    path: "chatgpt-train.jsonl"
   - split: test
     path: "chatgpt-test.jsonl"
-- config_name: chatgpt-
+- config_name: chatgpt-8192
   data_files:
   - split: train
-    path: "chatgpt-
+    path: "chatgpt-train-8192.jsonl"
   - split: test
-    path: "chatgpt-
----
+    path: "chatgpt-test-8192.jsonl"
+---
+
+This repository holds the data files for translating TechLinked, a show that covers mostly technology and science news.
+
+Raw data is in the data/ folder. Scripts generate training data in `jsonl` format, formatted for OpenAI's ChatCompletion fine-tuning API.
+
+The `-2000` variants are designed to be used with GPT-3, with its 8192-token context length limit. The `-8192` variants are designed for GPT-4o mini, with its 128,000-token context window and 16,384 max output tokens.
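Each line of the chatgpt-*.jsonl files below holds one ChatCompletion fine-tuning record as emitted by generate_chatgpt_varlen.py (shown at the end of this commit): a system prompt followed by one or two user/assistant turns. A minimal sketch of inspecting a record, assuming the LFS files have been pulled locally; note the generator writes the files with a utf-8-sig BOM:

```python
import json

# Read the first record of the 2000-character training split.
with open("chatgpt-train-2000.jsonl", encoding="utf-8-sig") as fin:
    record = json.loads(fin.readline())

# Print each turn: system prompt, then user (English) / assistant (Chinese) pairs.
for message in record["messages"]:
    print(message["role"], "->", message["content"][:60].replace("\n", " "))
```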
chatgpt-test-2000.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5b2b51eef25ded33a5060bf717dd024285d32d140bebe146f808500a6655e92
+size 142305
chatgpt-test-8192.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07873a323dea01f082dab4e695d24f87dac2471142863c5e2c1167d2593ea2be
+size 133630
chatgpt-train-2000.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b18ee356a6301b7912f142227a389bdcfaf55956814473abb817689f80fb17b9
+size 594567
chatgpt-train-8192.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:832d3d0ed431f4fb73f59e2aab560706553f3813045f5e17dc54b1396f5f88a6
+size 549979
combined-2000.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f11f3da6c70be783b0110e9dd64a28a216937cfc393ecf651e90163be8863f81
+size 736869
combined-8192.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0aaddc2f5fb4a2eb2d5c6a9a40bea8ab92f1333ffb0ebc2445bb6ce265ce50e5
+size 683606
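The jsonl entries above are Git LFS pointers, not the data itself; a plain clone without LFS yields the three-line stubs shown. One way to resolve them is the huggingface_hub client. A minimal sketch, where "user/techlinked-translation" is a placeholder for this repository's actual id:

```python
from huggingface_hub import hf_hub_download

# Downloads the resolved file content (not the LFS pointer) into the local
# cache and returns its path. The repo id below is hypothetical.
path = hf_hub_download(
    repo_id="user/techlinked-translation",
    filename="chatgpt-train-2000.jsonl",
    repo_type="dataset",
)
print(path)
```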
data/2023-10-18 China lost its GPU privileges [Wz9JoTTRa4k].cn.txt
CHANGED
@@ -110,8 +110,8 @@
 [292.04] 就算是编辑 他也没法切掉我 [293.8]
 [293.88] 我要继续下去 [294.28]
 [294.36] 纽约的立法者们提出了一项新法案 [296.64]
-[296.64] 要求购买任何可以理论上打印出全套或部分火器的3D打印机 [298.
-[298.62] 都需要进行刑事背景调查 [303.
+[296.64] 要求购买任何可以理论上打印出全套或部分火器的3D打印机 [298.56]
+[298.62] 都需要进行刑事背景调查 [303.5]
 [303.92] 不巧的是 符合这一条件的打印机占了大多数 [306.26]
 
 [306.42] Gun Digest推荐Creality Ender 3 V2 [310.1]
@@ -126,8 +126,8 @@
 [328.92] AMD发布了一个新的beta显卡驱动程序 [330.82]
 [331.06] 该驱动禁用了Radeon Anti-Lag Plus技术 [333.7]
 
-[333.7] 在该功能及其前身 不带加号的Anti-Lag此前导致了 [338.
-[338.58] CS2、Apex Legends 和COD的玩家封禁风波 [343.
+[333.7] 在该功能及其前身 不带加号的Anti-Lag此前导致了 [338.58]
+[338.58] CS2、Apex Legends 和COD的玩家封禁风波 [343.3]
 [343.68] 不幸的是 [344.24]
 [344.76] AMD建议使用Anti-Lag技术以减少AFMF造成的延迟 [349.46]
 [349.86] 在将其在所有DX11游戏中启用后 [352.68]
@@ -159,12 +159,11 @@
 [422.58] 当时IRS与税务代理合作 [425.4]
 
 [425.4] 为那些纳税申报相对简单的纳税人 提供了免费的替代方案 [429.94]
-[430.38] 然后这些公司将这些免费替代方案 [
-[432.62] 变得比找早起理由还要困难 [434.94]
+[430.38] 然后这些公司将这些免费替代方案 变得比找早起理由还要困难 [434.94]
 [435.34] 或者说找晚上上床的理由 [437.04]
 [437.16] 为什么这两个都是? [437.94]
 [441.0] 当IRS禁止他们这么做时 [442.74]
-[443.02] 万豪和Intuit(TurboTax的制造商) [446.
+[443.02] 万豪和Intuit(TurboTax的制造商) [446.08]
 [446.44] 离开了这个协议 [447.16]
 [447.52] Intuit已经游说反对直接申报十多年了 [451.16]
 
@@ -172,9 +171,9 @@
 [453.96] 的确有 但你永远找不到 [456.04]
 [456.32] 那我就继续熬夜了 [458.0]
 [458.16] Slimbook和Fedora项目 [459.46]
-[459.46] 宣布推出了一款针对Linux优化的新款笔记本电脑
-[
-[
+[459.46] 宣布推出了一款针对Linux优化的新款笔记本电脑 [461.64]
+[462.08] Fedora Slimbook [463.32]
+[463.78] 名字来源于 Fedora Slimbook [465.24]
 [465.46] 是一个铁骨铮铮的私家侦探 [467.02]
 [467.02] 他还是Marlboro的广告牌上的烟枪牛仔 yee-haw! [469.52]
 
data/2023-10-18 China lost its GPU privileges [Wz9JoTTRa4k].en.txt
CHANGED
@@ -47,7 +47,7 @@
 [107.5] without running into walls. [108.92]
 [109.14] But that's half the fun. [110.36]
 [110.52] While that's obviously fine if you're just taking your $500 Skull accessory on a walk through the park, [115.42]
-[115.66] it raises obvious privacy concerns [117.
+[115.66] it raises obvious privacy concerns [117.55]
 [117.64] when random tech bros are exploring the wonder of AR by recording banal interactions [122.38]
 [122.38] with identifiable service workers [123.94]
 [123.94] and posting it on the internet. [125.26]
@@ -168,7 +168,7 @@
 
 [425.4] to offer taxpayers with simple returns [427.72]
 [427.72] a free alternative for filing their taxes. [429.94]
-[430.38] Those companies then made those free alternatives
+[430.38] Those companies then made those free alternatives harder to find than a reason to get up in the morning [434.94]
 [435.34] or a reason to go to bed at night. [437.04]
 [437.16] Why is it both? [437.94]
 [441.0] When the IRS banned them from doing that, [442.74]
data/2023-10-25 Windows' Apple Silicon Moment is Coming [xFvGERpRUpM].cn.txt
CHANGED
@@ -40,8 +40,8 @@
 
 [129.4] 多位消息人士称NVIDIA正在准备一款RTX 4070 Super [132.78]
 [132.78] 和一款拥有20GB显存的RTX 4080的Super版 [137.38]
-[137.82] NVIDIA自从2020年4月发布RTX 2080 Super以来 [140.
-[140.
+[137.82] NVIDIA自从2020年4月发布RTX 2080 Super以来 [140.16]
+[140.16] 就再也没有使用过"Super"这个名号 [143.8]
 [144.3] 也许是因为NVIDIA进入了一个更加 [146.82]
 [147.42] 坏人的阶段 [148.34]
 [148.76] 所以不值得Super的名号 [149.52]
data/2023-10-28 Google Pays Apple EVERY YEAR [qvZAYJLmzuk].cn.txt
CHANGED
@@ -66,7 +66,7 @@
 [185.48] 现在可能对他们将不得不重做的文书工作感到恼火 [188.44]
 [188.44] 因为他们批准了127亿美元 [191.66]
 [191.66] 为合并提供资金援助 [193.54]
-[195.74] 哦 [196.
+[195.74] 哦 [196.36]
 [196.6] 爱的魔力转圈圈 [198.3]
 
 [198.56] 这不是第一次西数试图与铠侠合并 [202.84]
generate_chatgpt_varlen.py
ADDED

import argparse
import copy
import json
import os
import random
import re
from os import listdir
from os.path import isfile, join

# System prompt (in Chinese): "You are a translation expert specializing in
# tech news. Translate the following content into Chinese, output in the same
# format, and keep the timestamps. Do not drop any information. When merging
# multiple lines, keep the first and the last timestamp."
INSTRUCT_CHUNKED_PROMPT = """你是一个擅长翻译科技新闻的翻译专家。请将以下内容翻译为中文,使用相同格式输出,并保留时间戳。不要漏掉任何信息。合并多行文本时,保留第一个和最后一个时间戳。
"""

def break_line(line: str):
    """Split a subtitle line of the form "[start] text [end]" into its parts."""
    pattern = re.compile(r"^\[(\d+\.\d+)\](.*)\[(\d+\.\d+)\]$")
    match = pattern.match(line)
    if match is None:
        raise ValueError(f"Line does not match '[start] text [end]': {line}")
    start_time, text, end_time = match.group(1), match.group(2), match.group(3)
    return start_time, text.strip(), end_time, float(start_time), float(end_time)

def get_total_chars(cn_lines: list[str], en_lines: list[str]):
    return sum(len(line) for line in cn_lines) + sum(len(line) for line in en_lines)

def chunk_messages(cn_lines: list[str], en_lines: list[str], MAX_LEN: int = 2000):
    """Greedily group timestamp-aligned Chinese/English lines into chunks of
    roughly MAX_LEN total characters (the limit may be exceeded slightly)."""
    cn_lines_copy = copy.deepcopy(cn_lines)
    en_lines_copy = copy.deepcopy(en_lines)

    final_chunks: list[tuple[list[str], list[str]]] = []

    while True:
        cn_current_chunk: list[str] = []
        en_current_chunk: list[str] = []
        while True:
            curr_total_len = get_total_chars(cn_current_chunk, en_current_chunk)
            if len(cn_lines_copy) == 0 or len(en_lines_copy) == 0:
                # Out of input: flush the last (possibly partial) chunk,
                # but do not emit an empty trailing chunk.
                if cn_current_chunk or en_current_chunk:
                    final_chunks.append((cn_current_chunk, en_current_chunk))
                return final_chunks
            elif curr_total_len > MAX_LEN:
                final_chunks.append((cn_current_chunk, en_current_chunk))
                break
            else:
                # Append the next Chinese line, then consume English lines
                # until their end timestamps line up again.
                latest_cn_line = cn_lines_copy.pop(0)
                cn_start, cn_text, cn_end, cn_start_f, cn_end_f = break_line(latest_cn_line)
                cn_current_chunk.append(latest_cn_line)
                while True:
                    if len(en_lines_copy) == 0:
                        raise Exception("English lines ran out before reaching: " + latest_cn_line)
                    latest_en_line = en_lines_copy.pop(0)
                    en_start, en_text, en_end, en_start_f, en_end_f = break_line(latest_en_line)
                    en_current_chunk.append(latest_en_line)
                    if en_end == cn_end:
                        break
                    elif en_start_f > cn_end_f:
                        raise Exception("English and Chinese lines are not in sync. Offending line: " + latest_cn_line)

def new_message(eng_in, chs_out, prev_in=None, prev_out=None):
    """Build one ChatCompletion fine-tuning record. If the previous chunk is
    given, include it as an extra user/assistant turn for context."""
    if prev_in is None or prev_out is None:
        return {"messages": [
            {"role": "system", "content": INSTRUCT_CHUNKED_PROMPT},
            {"role": "user", "content": eng_in},
            {"role": "assistant", "content": chs_out}]
        }
    else:
        return {"messages": [
            {"role": "system", "content": INSTRUCT_CHUNKED_PROMPT},
            {"role": "user", "content": prev_in},
            {"role": "assistant", "content": prev_out},
            {"role": "user", "content": eng_in},
            {"role": "assistant", "content": chs_out}]
        }

def write_jsonl(message_groups, filename):
    """Write one record per chunk; every record after the first carries the
    previous chunk as context. Returns the serialized lines."""
    json_lines = []
    with open(filename, "w", encoding='utf-8-sig') as fout:
        for i in range(len(message_groups)):
            if i > 0:
                msg_obj = new_message(
                    message_groups[i][0].strip(),
                    message_groups[i][1].strip(),
                    message_groups[i-1][0].strip(),
                    message_groups[i-1][1].strip()
                )
            else:
                msg_obj = new_message(
                    message_groups[i][0].strip(),
                    message_groups[i][1].strip()
                )
            json.dump(msg_obj, fout)
            fout.write("\n")
            json_lines.append(json.dumps(msg_obj))
    return json_lines

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Generate ChatGPT training data from a directory of subtitle files.')
    parser.add_argument('data_dir', type=str, nargs='?', help='The directory containing the subtitle files.', default="data")
    parser.add_argument('--maxlen', type=int, help='The maximum length of a combined message. \nNote that this limit will be exceeded a little bit, so leave some headroom. \nRecommended value is max context length / 4.', default=2000)
    parser.add_argument('--test-ratio', type=float, help='The ratio of test data to training data.', default=0.2)

    args = parser.parse_args()

    message_groups = []

    DOCUMENT_ROOT = args.data_dir
    files = listdir(DOCUMENT_ROOT)
    files = list(filter(lambda x: x.endswith(".en.txt"), files))
    files.sort()

    print(files)

    for f in files:
        en_fname = join(DOCUMENT_ROOT, f)
        if en_fname.endswith(".en.txt") and isfile(en_fname):
            cn_fname = join(DOCUMENT_ROOT, f.replace(".en.txt", ".cn.txt"))
            if os.path.exists(cn_fname) and isfile(cn_fname):
                print(f"Found data pair: {en_fname} and {cn_fname}")

                with open(en_fname, "r", encoding='utf-8-sig') as enfin:
                    en_messages = enfin.read()

                with open(cn_fname, "r", encoding='utf-8-sig') as cnfin:
                    cn_messages = cnfin.read()

                en_messages = [part.strip() for part in en_messages.split("\n") if part.strip() != ""]
                cn_messages = [part.strip() for part in cn_messages.split("\n") if part.strip() != ""]

                try:
                    chunks = chunk_messages(cn_messages, en_messages, MAX_LEN=args.maxlen)
                    # Re-join each chunk into one multi-line user/assistant pair.
                    en_messages = []
                    cn_messages = []
                    for cn_chunk, en_chunk in chunks:
                        en_messages.append("\n".join(en_chunk))
                        cn_messages.append("\n".join(cn_chunk))
                        print("\n".join(en_chunk))
                        print("---")
                        print("\n".join(cn_chunk))
                        print("\n")
                except Exception as e:
                    print(f"Error: {e}")
                    continue

                if len(en_messages) != len(cn_messages):
                    print(f"English and Chinese version mismatch. Discarding {en_fname} pair.")

                messages = zip(en_messages, cn_messages)

                message_groups.extend(messages)

    jsonl_lines = write_jsonl(message_groups, f"combined-{args.maxlen}.jsonl")

    # Shuffle, then split into train/test.
    random.shuffle(jsonl_lines)

    TEST_RATIO = args.test_ratio

    split_index = int(len(jsonl_lines) * TEST_RATIO)

    test = jsonl_lines[:split_index]
    train = jsonl_lines[split_index:]

    with open(f"chatgpt-train-{args.maxlen}.jsonl", "w", encoding='utf-8-sig') as fout:
        for line in train:
            fout.write(line + "\n")

    with open(f"chatgpt-test-{args.maxlen}.jsonl", "w", encoding='utf-8-sig') as fout:
        for line in test:
            fout.write(line + "\n")

# Legacy variant that only used the five most recent files, kept for reference:
# recent_files = files[-5:]
# recent_messages = []

# for f in recent_files:
#     en_fname = join(DOCUMENT_ROOT, f)
#     if en_fname.endswith(".en.txt") and isfile(en_fname):
#         cn_fname = join(DOCUMENT_ROOT, f.replace(".en.txt", ".cn.txt"))
#         if os.path.exists(cn_fname) and isfile(cn_fname):
#             print(f"Found data pair: {en_fname} and {cn_fname}")

#             with open(en_fname, "r", encoding='utf-8-sig') as enfin:
#                 en_messages = enfin.read()

#             with open(cn_fname, "r", encoding='utf-8-sig') as cnfin:
#                 cn_messages = cnfin.read()

#             en_messages = [part.strip() for part in en_messages.split("\n") if part.strip() != ""]
#             cn_messages = [part.strip() for part in cn_messages.split("\n") if part.strip() != ""]

#             if len(en_messages) != len(cn_messages):
#                 print(f"English and Chinese version mismatch. Discarding {en_fname} pair.")

#             messages = zip(en_messages, cn_messages)

#             recent_messages.extend(messages)

# write_jsonl(recent_messages, "recent-combined.jsonl")

# TEST_RATIO = 0.2

# split_index = int(len(recent_messages) * TEST_RATIO)

# test = recent_messages[:split_index]
# train = recent_messages[split_index:]

# write_jsonl(train, "chatgpt-recent-train.jsonl")
# write_jsonl(test, "chatgpt-recent-test.jsonl")
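The script is invoked as, for example, `python generate_chatgpt_varlen.py data --maxlen 2000 --test-ratio 0.2`, which writes combined-2000.jsonl plus the chatgpt-train-2000.jsonl / chatgpt-test-2000.jsonl split. Below is a small sanity check of the chunking logic on synthetic, timestamp-aligned lines; the transcript text and the tiny MAX_LEN are made up for illustration:

```python
# Assumes this snippet sits next to generate_chatgpt_varlen.py; importing the
# module only defines the functions, since the CLI lives under __main__.
from generate_chatgpt_varlen import chunk_messages

cn = [
    "[0.0] 你好 [1.0]",
    "[1.0] 世界 [2.0]",
]
en = [
    "[0.0] Hello [0.5]",
    "[0.5] there [1.0]",  # same end stamp as cn[0]: grouped with the line above
    "[1.0] world [2.0]",
]

# With a tiny MAX_LEN, each timestamp-aligned group becomes its own chunk.
for cn_chunk, en_chunk in chunk_messages(cn, en, MAX_LEN=20):
    print(en_chunk, "=>", cn_chunk)
```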