Upload folder using huggingface_hub
- .DS_Store +0 -0
- .mdl +0 -0
- .msc +0 -0
- .mv +1 -0
- README.md +275 -0
- config.yaml +46 -0
- configuration.json +13 -0
- example/punc_example.txt +3 -0
- fig/struct.png +0 -0
- model.onnx +3 -0
- model.pt +3 -0
- tokens.json +0 -0
.DS_Store
ADDED
Binary file (6.15 kB)

.mdl
ADDED
Binary file (80 Bytes)

.msc
ADDED
Binary file (566 Bytes)

.mv
ADDED
@@ -0,0 +1 @@
Revision:v2.0.4,CreatedAt:1706015306
README.md
ADDED
@@ -0,0 +1,275 @@
---
tasks:
- punctuation
domain:
- audio
model-type:
- Classification
frameworks:
- pytorch
metrics:
- f1_score
license: apache-2.0
language:
- cn
tags:
- FunASR
- CT-Transformer
- Alibaba
- ICASSP 2020
datasets: 'commonzh-36k'
widgets:
  - task: punctuation
    inputs:
      - type: text
        name: input
        title: 文本
    examples:
      - name: 1
        title: 示例1
        inputs:
          - name: input
            data: 我们都是木头人不会讲话不会动
    inferencespec:
      cpu: 1 # number of CPUs
      memory: 4096
---

# Introduction to the Controllable Time-delay Transformer

[//]: # (The Controllable Time-delay Transformer is an end-to-end punctuation classification model.)

[//]: # (A standard Transformer attends to far-future context, so predicted punctuation keeps changing for a long time before it settles. The Controllable Time-delay Transformer bounds the punctuation delay with no loss in accuracy.)

# Highlights
- General-purpose Chinese punctuation model: can be used to predict punctuation for the text output of speech recognition models.
- Can be used with the [Paraformer-large long-audio model](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary).
- Built on the [FunASR framework](https://github.com/alibaba-damo-academy/FunASR), where ASR, VAD and punctuation modules can be combined freely.
- Punctuation prediction for plain-text input.

## <strong>[About the FunASR open-source project](https://github.com/alibaba-damo-academy/FunASR)</strong>
<strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> aims to build a bridge between academic research on speech recognition and industrial applications. By releasing the training and fine-tuning recipes of industrial-grade speech recognition models, it helps researchers and developers study and productionize ASR models more conveniently and promotes the growth of the speech recognition ecosystem. Have fun with ASR!

[**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
| [**What's new**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
| [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
| [**Service deployment**](https://www.funasr.com)
| [**Model zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
| [**Contact us**](https://github.com/alibaba-damo-academy/FunASR#contact)


## How the model works

The Controllable Time-delay Transformer is the punctuation module of the efficient post-processing framework proposed by the Alibaba DAMO Academy speech team. This repository provides a general-purpose Chinese punctuation model: it can predict punctuation for plain-text input, and it can also serve as a post-processing step for speech recognition, helping the ASR module produce readable text.

<p align="center">
<img src="fig/struct.png" alt="Controllable Time-delay Transformer model structure" width="500" />

As shown in the figure above, the Controllable Time-delay Transformer consists of three parts: Embedding, Encoder and Predictor. The Embedding is the sum of token embeddings and positional embeddings. The Encoder can use different network structures, such as self-attention, Conformer or SAN-M. The Predictor predicts the punctuation type that follows each token.
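As a rough mental model, the sketch below implements this embedding plus encoder plus per-token classifier layout in plain PyTorch. It is illustrative only: the class name, the vanilla `nn.TransformerEncoder` and the layer sizes are assumptions chosen for readability, not the actual FunASR `CTTransformer` implementation, which uses a SAN-M encoder with controlled time delay (see `config.yaml`).

```python
# Illustrative sketch only: not the FunASR CTTransformer implementation.
import torch
import torch.nn as nn

class PunctuationTagger(nn.Module):
    """Hypothetical embedding -> encoder -> per-token punctuation classifier."""
    def __init__(self, vocab_size: int, num_punc: int = 6, d_model: int = 256, max_len: int = 2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # token embedding
        self.pos = nn.Embedding(max_len, d_model)        # positional embedding (learned, for simplicity)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, dim_feedforward=1024, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # stand-in for the SAN-M encoder
        self.predictor = nn.Linear(d_model, num_punc)    # one punctuation label per token

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.embed(token_ids) + self.pos(positions)  # word vectors plus position vectors
        return self.predictor(self.encoder(x))           # (batch, seq_len, num_punc) logits

tagger = PunctuationTagger(vocab_size=272727)            # vocab size taken from the model name
logits = tagger(torch.randint(0, 272727, (1, 14)))
print(logits.shape)                                      # torch.Size([1, 14, 6])
```

Taking `argmax` over the last dimension gives one punctuation class per input token, which is then appended to the text after that token.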
We chose the Transformer for its strong performance. However, while the Transformer performs well, properties such as its serialized input introduce considerable latency into the system. A standard Transformer can attend to all future context, so punctuation decisions may depend on context far in the future. For the user this feels like punctuation that keeps being refreshed, with results that do not settle for a long time. To address this, we proposed the Controllable Time-Delay Transformer (CT-Transformer), which effectively bounds the punctuation delay with no loss in model performance.

For more details see:
- Paper: [CONTROLLABLE TIME-DELAY TRANSFORMER FOR REAL-TIME PUNCTUATION PREDICTION AND DISFLUENCY DETECTION](https://arxiv.org/pdf/2003.01309.pdf)

## How to use the model and train your own

The pretrained model provided here is a general-domain model trained on large-scale data. Developers can further adapt it to their own domain with ModelScope's fine-tuning features or with the project's GitHub repository, [FunASR](https://github.com/alibaba-damo-academy/FunASR).

### Developing in a Notebook

For development work we especially recommend using a Notebook for offline processing. Log in to your ModelScope account and click the "Open in Notebook" button at the top right of the model page; a dialog appears, and on first use you will be asked to link an Alibaba Cloud account, which only requires following the prompts. Once the account is linked you can choose and launch an instance: select the compute resources, create the instance, and when it is ready enter the development environment and run your code.


#### Inference with ModelScope

Three input formats are supported; the API calls are shown in the following examples:
- Path to a text.scp file, e.g. example/punc_example.txt, where each line is key + "\t" + value:
```sh
cat example/punc_example.txt
1 跨境河流是养育沿岸人民的生命之源
2 从存储上来说仅仅是全景图片它就会是图片的四倍的容量
3 那今天的会就到这里吧happy new year明年见
```
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipline = pipeline(
    task=Tasks.punctuation,
    model='damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch',
    model_revision="v2.0.4")

rec_result = inference_pipline('example/punc_example.txt')
print(rec_result)
```
- Text passed directly as data, e.g. content the caller has read from a file:
```python
rec_result = inference_pipline('我们都是木头人不会讲话不会动')
```
- URL of a text file, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/punc_example.txt:
```python
rec_result = inference_pipline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/punc_example.txt')
```


## Inference with FunASR

A quick-start guide follows. Test audio: [Chinese](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_en.wav).

### Command line
Run in a terminal:

```shell
funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=vad_example.wav
```

Note: both a single audio file and a file list are supported; the list is a Kaldi-style wav.scp of the form `wav_id wav_path`.
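A minimal wav.scp, for illustration only (the IDs and paths below are placeholders), looks like this:

```text
vad_example  /path/to/vad_example.wav
meeting_001  /path/to/meeting_001.wav
```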
### Python examples
#### Speech recognition (non-streaming)
```python
from funasr import AutoModel
# paraformer-zh is a multi-functional asr model
# use vad, punc, spk or not as you need
model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
                  vad_model="fsmn-vad", vad_model_revision="v2.0.4",
                  punc_model="ct-punc-c", punc_model_revision="v2.0.4",
                  # spk_model="cam++", spk_model_revision="v2.0.2",
                  )
res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
                     batch_size_s=300,
                     hotword='魔搭')
print(res)
```
Note: `model_hub` selects the model repository: `ms` downloads from ModelScope, `hf` downloads from Hugging Face.

#### Speech recognition (streaming)

```python
from funasr import AutoModel

chunk_size = [0, 10, 5]  # [0, 10, 5] 600ms, [0, 8, 4] 480ms
encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention

model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")

import soundfile
import os

wav_file = os.path.join(model.model_path, "example/asr_example.wav")
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = chunk_size[1] * 960  # 600ms

cache = {}
total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
    print(res)
```

Note: `chunk_size` configures the streaming latency. `[0, 10, 5]` means the real-time display granularity is `10*60=600ms` and the look-ahead is `5*60=300ms`. Each inference call takes `600ms` of audio as input (`16000*0.6=9600` samples) and outputs the corresponding text; for the last speech segment, set `is_final=True` to force the final words out.
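To make the relationship between `chunk_size`, latency and stride explicit, the short illustrative snippet below recomputes those numbers for the two configurations mentioned in the comment, assuming 16 kHz audio and 60 ms per chunk unit as stated in the note:

```python
SAMPLE_RATE = 16000  # Hz, as in the note above
FRAME_MS = 60        # one chunk unit corresponds to 60 ms

for chunk_size in ([0, 10, 5], [0, 8, 4]):
    step_ms = chunk_size[1] * FRAME_MS        # how often text is emitted
    lookahead_ms = chunk_size[2] * FRAME_MS   # future context the model may wait for
    stride = SAMPLE_RATE * step_ms // 1000    # samples fed to each generate() call
    print(chunk_size, f"-> {step_ms}ms step, {lookahead_ms}ms look-ahead, {stride} samples")

# [0, 10, 5] -> 600ms step, 300ms look-ahead, 9600 samples
# [0, 8, 4]  -> 480ms step, 240ms look-ahead, 7680 samples
```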
#### Voice activity detection (non-streaming)
```python
from funasr import AutoModel

model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
res = model.generate(input=wav_file)
print(res)
```

#### Voice activity detection (streaming)
```python
from funasr import AutoModel

chunk_size = 200  # ms
model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

import soundfile

wav_file = f"{model.model_path}/example/vad_example.wav"
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = int(chunk_size * sample_rate / 1000)

cache = {}
total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
    if len(res[0]["value"]):
        print(res)
```

#### Punctuation restoration
```python
from funasr import AutoModel

model = AutoModel(model="ct-punc", model_revision="v2.0.4")

res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)
```

#### Timestamp prediction
```python
from funasr import AutoModel

model = AutoModel(model="fa-zh", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```

More detailed usage: [examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining)


## Fine-tuning

Detailed usage: [examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining)


## Benchmark
The general-purpose Chinese punctuation model performs well on in-house general-domain business data. The training data contains roughly 33M samples, each of which may consist of one or more sentences.

### In-house data (20000+ samples)

| precision | recall | f1_score |
|:------------------------------------:|:-------------------------------------:|:-------------------------------------:|
| <div style="width: 150pt">53.8</div> | <div style="width: 150pt">60.0</div> | <div style="width: 150pt">56.5</div> |

## Usage and scope

Runtime environment
- Runs on Linux-x86_64, macOS and Windows.

How to use
- Direct inference: run the model on input text and obtain the same text with punctuation added.

Scope and target scenarios
- Suitable for punctuation prediction on text data of any length.

## Reference and citation

```BibTeX
@inproceedings{chen2020controllable,
  title={Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection},
  author={Chen, Qian and Chen, Mengzhe and Li, Bo and Wang, Wen},
  booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={8069--8073},
  year={2020},
  organization={IEEE}
}
```
config.yaml
ADDED
@@ -0,0 +1,46 @@
model: CTTransformer
model_conf:
    ignore_id: 0
    embed_unit: 256
    att_unit: 256
    dropout_rate: 0.1
    punc_list:
    - <unk>
    - _
    - ,
    - 。
    - ?
    - 、
    punc_weight:
    - 1.0
    - 1.0
    - 1.0
    - 1.0
    - 1.0
    - 1.0
    sentence_end_id: 3

encoder: SANMEncoder
encoder_conf:
    input_size: 256
    output_size: 256
    attention_heads: 8
    linear_units: 1024
    num_blocks: 4
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.0
    input_layer: pe
    pos_enc_class: SinusoidalPositionEncoder
    normalize_before: true
    kernel_size: 11
    sanm_shfit: 0
    selfattention_layer_type: sanm
    padding_idx: 0

tokenizer: CharTokenizer
tokenizer_conf:
    unk_symbol: <unk>
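The `punc_list` above is the label inventory the Predictor chooses from, with `_` meaning "no punctuation after this token" and index 3 (`。`) marking a sentence end, as `sentence_end_id: 3` indicates. The snippet below is an illustrative sketch (not part of FunASR, and the label sequence is made up) of how per-token label indices would be applied to a character sequence:

```python
# Illustrative only: maps predicted label indices onto characters using punc_list.
punc_list = ["<unk>", "_", ",", "。", "?", "、"]  # from config.yaml; index 3 ends a sentence
sentence_end_id = 3

def apply_punctuation(chars, label_ids):
    """chars: list of tokens; label_ids: one predicted punctuation index per token."""
    out = []
    for ch, lid in zip(chars, label_ids):
        out.append(ch)
        if punc_list[lid] not in ("<unk>", "_"):  # only append real punctuation marks
            out.append(punc_list[lid])
    return "".join(out)

print(apply_punctuation(list("我们都是木头人不会讲话不会动"),
                        [1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 3]))
# -> 我们都是木头人,不会讲话不会动。  (label sequence invented for illustration)
```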
configuration.json
ADDED
@@ -0,0 +1,13 @@
{
  "framework": "pytorch",
  "task" : "punctuation",
  "model": {"type" : "funasr"},
  "pipeline": {"type":"funasr-pipeline"},
  "model_name_in_hub": {
    "ms":"iic/punc_ct-transformer_zh-cn-common-vocab272727-pytorch",
    "hf":""},
  "file_path_metas": {
    "init_param":"model.pt",
    "config":"config.yaml",
    "tokenizer_conf": {"token_list": "tokens.json"}}
}
example/punc_example.txt
ADDED
@@ -0,0 +1,3 @@
1 跨境河流是养育沿岸人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切愿意进一步完善双方联合工作机制凡是中方能做的我们都会去做而且会做得更好我请印度朋友们放心中国在上游的任何开发利用都会经过科学规划和论证兼顾上下游的利益
2 从存储上来说仅仅是全景图片它就会是图片的四倍的容量然后全景的视频会是普通视频八倍的这个存储的容要求而三d的模型会是图片的十倍这都对我们今天运行在的云计算的平台存储的平台提出了更高的要求
3 那今天的会就到这里吧 happy new year 明年见
fig/struct.png
ADDED
Binary file (model structure diagram)
model.onnx
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46509b8cb8e609528ba2458492410e8570e3bffa06d751ee368b82a6fa13d19f
size 292073792
model.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5818bb9d933805a916eebe41eb41648f7f9caad30b4bd59d56f3ca135421916
size 291979892
tokens.json
ADDED
The diff for this file is too large to render.