# Introduction

This repo contains a pre-trained model trained with
<https://github.com/k2-fsa/icefall/pull/213>.

It is trained on the full LibriSpeech dataset.
In addition, it uses the `L` subset of [GigaSpeech](https://github.com/SpeechColab/GigaSpeech)
as extra training data.

## How to clone this repo

```bash
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01

cd icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01
git lfs pull
```

**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
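
If you are unsure whether `git lfs pull` actually fetched the weights, a quick sanity check is the size of the checkpoint file: a Git LFS pointer is only a tiny text stub, while the real checkpoint is much larger. A minimal sketch (the 1 MB threshold is an arbitrary assumption for this check):

```python
import os

# A Git LFS pointer file is ~130 bytes of text; the real checkpoint is much larger.
# The 1 MB threshold is an arbitrary assumption for this sanity check.
size = os.path.getsize("exp/pretrained.pt")
if size < 1_000_000:
    print(f"exp/pretrained.pt is only {size} bytes -- run `git lfs pull`!")
else:
    print(f"exp/pretrained.pt is {size / 1e6:.1f} MB -- looks good.")
```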

The model in this repo is trained using the commit `TODO`.

You can use

```bash
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout TODO
```

to download `icefall`.

You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/TODO/egs/librispeech/ASR/transducer_stateless_multi_datasets/train.py#L198>.

In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, and a 2048-dim feedforward module;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.

The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419):
a Conv1d layer is placed right after the input embedding layer.
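
For illustration, here is a minimal PyTorch sketch of such a stateless decoder: an embedding layer followed by a Conv1d of kernel size 2, using the dimensions quoted above. It is a simplified sketch, not the exact icefall implementation.

```python
import torch
import torch.nn as nn

class StatelessDecoder(nn.Module):
    """Sketch of a stateless transducer decoder: embedding + Conv1d (kernel size 2)."""

    def __init__(self, vocab_size: int = 500, embed_dim: int = 1024, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, context_size), the most recent non-blank tokens
        emb = self.embedding(y)        # (batch, context_size, embed_dim)
        emb = emb.permute(0, 2, 1)     # (batch, embed_dim, context_size)
        out = self.conv(emb)           # (batch, embed_dim, 1)
        return out.permute(0, 2, 1)    # (batch, 1, embed_dim)

decoder = StatelessDecoder()
print(decoder(torch.tensor([[3, 7]])).shape)  # torch.Size([1, 1, 1024])
```

Because the decoder sees only a fixed window of previous tokens, it carries no recurrent state, which is what "stateless" means here.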

-----

## Description

This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder: it is stateless
and contains only an embedding layer and a Conv1d.

The commands for training are:

```bash
cd egs/librispeech/ASR/
./prepare.sh
./prepare_giga_speech.sh

export CUDA_VISIBLE_DEVICES="0,1,2,3"

./transducer_stateless_multi_datasets/train.py \
  --world-size 4 \
  --num-epochs 40 \
  --start-epoch 0 \
  --exp-dir transducer_stateless_multi_datasets/exp-full-2 \
  --full-libri 1 \
  --max-duration 300 \
  --lr-factor 5 \
  --bpe-model data/lang_bpe_500/bpe.model \
  --modified-transducer-prob 0.25 \
  --giga-prob 0.2
```
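
Roughly speaking, `--giga-prob 0.2` controls how the two corpora are mixed: each training batch is drawn from GigaSpeech with probability 0.2 and from LibriSpeech otherwise. A hedged sketch of the idea (not icefall's actual sampler, which is built on Lhotse):

```python
import random

def next_batch(libri_batches, giga_batches, giga_prob: float = 0.2):
    """Draw the next training batch: GigaSpeech with probability `giga_prob`,
    LibriSpeech otherwise.  Illustrative only, not icefall's real data pipeline."""
    if random.random() < giga_prob:
        return next(giga_batches)
    return next(libri_batches)
```

(`--modified-transducer-prob` plays an analogous per-batch role, choosing between the regular and the modified transducer loss.)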

The tensorboard training log can be found at
<https://tensorboard.dev/experiment/xmo5oCgrRVelH9dCeOkYBg/>.

The commands for decoding are:

```bash
epoch=39
avg=15
sym=1

# greedy search
./transducer_stateless_multi_datasets/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir transducer_stateless_multi_datasets/exp-full-2 \
  --bpe-model ./data/lang_bpe_500/bpe.model \
  --max-duration 100 \
  --context-size 2 \
  --max-sym-per-frame $sym

# modified beam search
./transducer_stateless_multi_datasets/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir transducer_stateless_multi_datasets/exp-full-2 \
  --bpe-model ./data/lang_bpe_500/bpe.model \
  --max-duration 100 \
  --context-size 2 \
  --decoding-method modified_beam_search \
  --beam-size 4
```
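
`--max-sym-per-frame 1` bounds how many symbols greedy search may emit per encoder frame. The sketch below shows the idea; `encoder_out`, `decoder`, and `joiner` are placeholder names, not icefall's exact API.

```python
import torch

def greedy_search(encoder_out, decoder, joiner,
                  blank_id: int = 0, context_size: int = 2,
                  max_sym_per_frame: int = 1):
    """Sketch of transducer greedy search with a per-frame symbol cap.
    encoder_out: (T, encoder_dim); decoder and joiner are placeholder callables."""
    hyp = [blank_id] * context_size            # decoder context, seeded with blanks
    for t in range(encoder_out.size(0)):       # one encoder frame at a time
        for _ in range(max_sym_per_frame):     # emit at most this many symbols
            context = torch.tensor([hyp[-context_size:]])
            decoder_out = decoder(context)
            logits = joiner(encoder_out[t], decoder_out)
            y = int(logits.argmax(dim=-1))
            if y == blank_id:                  # blank => advance to the next frame
                break
            hyp.append(y)
    return hyp[context_size:]
```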

You can find the decoding logs for the above commands in this
repo (in the folder `log`).

The WERs for the test datasets are:

|                                     | test-clean | test-other | comment                                   |
|-------------------------------------|------------|------------|-------------------------------------------|
| greedy search (max sym per frame 1) | 2.64       | 6.55       | --epoch 39, --avg 15, --max-duration 100  |
| modified beam search (beam size 4)  | 2.61       | 6.46       | --epoch 39, --avg 15, --max-duration 100  |

# File description

- [log][log]: this directory contains the decoding logs and decoding results
- [test_wavs][test_wavs]: this directory contains wave files for testing the pre-trained model
- [data][data]: this directory contains files generated by [prepare.sh][prepare]
- [exp][exp]: this directory contains only one file, `pretrained.pt`

`exp/pretrained.pt` is generated by the following command:

```bash
./transducer_stateless_multi_datasets/export.py \
  --epoch 39 \
  --avg 15 \
  --bpe-model data/lang_bpe_500/bpe.model \
  --exp-dir transducer_stateless_multi_datasets/exp-full-2
```
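
`--epoch 39 --avg 15` means the exported model is the element-wise average of the checkpoints from the last 15 epochs (epoch 25 through epoch 39). A minimal sketch of that averaging; the `"model"` key in the checkpoint dict is an assumption:

```python
import torch

def average_checkpoints(paths):
    """Element-wise average of model parameters across checkpoints (sketch)."""
    avg = None
    for path in paths:
        # Assumes each checkpoint stores its weights under the "model" key.
        state = torch.load(path, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.detach().clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

paths = [f"transducer_stateless_multi_datasets/exp-full-2/epoch-{i}.pt"
         for i in range(25, 40)]  # epochs 25..39 inclusive
```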

**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:

```bash
cp icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/exp/pretrained.pt \
  /path/to/icefall/egs/librispeech/ASR/transducer_stateless_multi_datasets/exp/epoch-999.pt
```

and pass `--epoch 999 --avg 1` to `transducer_stateless_multi_datasets/decode.py`.

[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/log