csukuangfj committed
Commit: e224e71
1 Parent(s): f8bdf58
add results for fast_beam_search.
Changed files:
- decoding_results/fast_beam_search/errs-dev-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt (+0 -0)
- decoding_results/fast_beam_search/errs-test-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt (+0 -0)
- decoding_results/fast_beam_search/log-decode-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model-2022-06-19-12-28-33 (+34 -0)
- decoding_results/fast_beam_search/recogs-dev-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt (+0 -0)
- decoding_results/fast_beam_search/recogs-test-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt (+0 -0)
decoding_results/fast_beam_search/errs-dev-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
ADDED (diff too large to render)
decoding_results/fast_beam_search/errs-test-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
ADDED (diff too large to render)
decoding_results/fast_beam_search/log-decode-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model-2022-06-19-12-28-33
ADDED
2022-06-19 12:28:33,662 INFO [decode.py:497] Decoding started
2022-06-19 12:28:33,662 INFO [decode.py:503] Device: cuda:0
2022-06-19 12:28:33,899 INFO [lexicon.py:176] Loading pre-compiled data/lang_char/Linv.pt
2022-06-19 12:28:33,928 INFO [decode.py:509] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 1000, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'pruned-rnnt-aishell', 'icefall-git-sha1': 'd0a5f1d-dirty', 'icefall-git-date': 'Mon Jun 13 20:40:46 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-aishell', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-7-0616225511-78bf4545d8-tv52r', 'IP address': '10.177.77.9'}, 'epoch': 29, 'iter': 0, 'avg': 5, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless3/exp-context-size-1'), 'lang_dir': PosixPath('data/lang_char'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 1, 'max_sym_per_frame': 1, 'num_encoder_layers': 12, 'dim_feedforward': 2048, 'nhead': 8, 'encoder_dim': 512, 'decoder_dim': 512, 'joiner_dim': 512, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('pruned_transducer_stateless3/exp-context-size-1/fast_beam_search'), 'suffix': 'epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'vocab_size': 4336}
2022-06-19 12:28:33,928 INFO [decode.py:511] About to create model
2022-06-19 12:28:34,716 INFO [decode.py:583] Calculating the averaged model over epoch range from 24 (excluded) to 29
2022-06-19 12:28:42,307 INFO [decode.py:606] Number of model parameters: 96983734
2022-06-19 12:28:42,308 INFO [aishell.py:51] About to get test cuts from data/fbank/aishell_cuts_test.jsonl.gz
2022-06-19 12:28:42,311 INFO [aishell.py:45] About to get valid cuts from data/fbank/aishell_cuts_dev.jsonl.gz
2022-06-19 12:28:45,053 INFO [decode.py:403] batch 0/?, cuts processed until now is 99
2022-06-19 12:29:04,680 INFO [decode.py:403] batch 20/?, cuts processed until now is 2264
2022-06-19 12:29:23,383 INFO [decode.py:403] batch 40/?, cuts processed until now is 4638
2022-06-19 12:29:41,461 INFO [decode.py:403] batch 60/?, cuts processed until now is 7129
2022-06-19 12:29:44,162 INFO [decode.py:420] The transcripts are stored in pruned_transducer_stateless3/exp-context-size-1/fast_beam_search/recogs-test-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-06-19 12:29:44,482 INFO [utils.py:408] [test-beam_4_max_contexts_4_max_states_8] %WER 5.13% [5374 / 104765, 149 ins, 659 del, 4566 sub ]
2022-06-19 12:29:44,920 INFO [decode.py:437] Wrote detailed error stats to pruned_transducer_stateless3/exp-context-size-1/fast_beam_search/errs-test-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-06-19 12:29:44,921 INFO [decode.py:454]
For test, CER of different settings are:
beam_4_max_contexts_4_max_states_8	5.13	best for test

2022-06-19 12:29:46,944 INFO [decode.py:403] batch 0/?, cuts processed until now is 111
2022-06-19 12:30:07,495 INFO [decode.py:403] batch 20/?, cuts processed until now is 2529
2022-06-19 12:30:26,986 INFO [decode.py:403] batch 40/?, cuts processed until now is 5042
2022-06-19 12:30:46,547 INFO [decode.py:403] batch 60/?, cuts processed until now is 7804
2022-06-19 12:31:05,927 INFO [decode.py:403] batch 80/?, cuts processed until now is 10338
2022-06-19 12:31:25,527 INFO [decode.py:403] batch 100/?, cuts processed until now is 13159
2022-06-19 12:31:34,494 INFO [decode.py:420] The transcripts are stored in pruned_transducer_stateless3/exp-context-size-1/fast_beam_search/recogs-dev-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-06-19 12:31:35,005 INFO [utils.py:408] [dev-beam_4_max_contexts_4_max_states_8] %WER 4.91% [10075 / 205341, 287 ins, 1159 del, 8629 sub ]
2022-06-19 12:31:35,926 INFO [decode.py:437] Wrote detailed error stats to pruned_transducer_stateless3/exp-context-size-1/fast_beam_search/errs-dev-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-06-19 12:31:35,927 INFO [decode.py:454]
For dev, CER of different settings are:
beam_4_max_contexts_4_max_states_8	4.91	best for dev

2022-06-19 12:31:35,944 INFO [decode.py:633] Done!
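The log line "Calculating the averaged model over epoch range from 24 (excluded) to 29" means the decoded weights are an average over the epoch-25 through epoch-29 checkpoints (matching --epoch 29 --avg 5). icefall's --use-averaged-model path computes this from running averages maintained during training; the sketch below only illustrates the underlying idea of element-wise checkpoint averaging, with toy dict "checkpoints" standing in for real state dicts:

```python
def average_checkpoints(state_dicts):
    """Element-wise average of model state dicts (weights as lists of floats)."""
    n = len(state_dicts)
    return {
        key: [sum(vals) / n for vals in zip(*(sd[key] for sd in state_dicts))]
        for key in state_dicts[0]
    }

# Hypothetical toy checkpoints for epochs 25..29, one weight vector each.
ckpts = [{"w": [float(e), 2.0 * e]} for e in range(25, 30)]
avg = average_checkpoints(ckpts)
print(avg)  # {'w': [27.0, 54.0]}
```

Averaging the last few epochs tends to smooth out per-epoch noise in the weights, which is why a single averaged model often decodes slightly better than any individual checkpoint.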
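The "%WER" lines above are character error rates for AISHELL (the utils.py formatter prints "%WER" regardless of the token unit; the summary lines themselves say CER). Each figure is (insertions + deletions + substitutions) divided by the number of reference characters, and the reported 5.13 and 4.91 follow directly from the bracketed counts:

```python
def cer(ins, dels, subs, ref_chars):
    """Character error rate in percent: total edits over reference length."""
    return 100.0 * (ins + dels + subs) / ref_chars

# Counts taken from the log: [5374 / 104765, 149 ins, 659 del, 4566 sub]
print(round(cer(149, 659, 4566, 104765), 2))   # 5.13 (test)
# Counts taken from the log: [10075 / 205341, 287 ins, 1159 del, 8629 sub]
print(round(cer(287, 1159, 8629, 205341), 2))  # 4.91 (dev)
```

Note that 149 + 659 + 4566 = 5374 and 287 + 1159 + 8629 = 10075, matching the error totals the log reports before the slash.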
decoding_results/fast_beam_search/recogs-dev-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
ADDED (diff too large to render)
decoding_results/fast_beam_search/recogs-test-beam_4_max_contexts_4_max_states_8-epoch-29-avg-5-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
ADDED (diff too large to render)