whisper-small-multilingual-spoken-ner-pipeline-lora / logs / whisper-spoken-ner-small-pipe-lora.err
Quentin Meeus · add logs · e508ee7
Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bit training: True
/users/spraak/qmeeus/micromamba/envs/torch-cu121/lib/python3.10/site-packages/transformers/configuration_utils.py:508: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
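This FutureWarning (it recurs below for the feature extractor, tokenizer, and model) is triggered by passing use_auth_token= to from_pretrained; recent Transformers versions expect token= instead. A minimal sketch of the fix, using the base checkpoint named in the config below:

    from transformers import AutoConfig

    # Deprecated, emits the FutureWarning above:
    # config = AutoConfig.from_pretrained("openai/whisper-small", use_auth_token=True)

    # Preferred since the deprecation; `token` accepts a string or True (stored login):
    config = AutoConfig.from_pretrained("openai/whisper-small", token=True)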
[INFO|configuration_utils.py:737] 2024-01-08 18:44:38,532 >> loading configuration file configs/whisper_small_ner_mtl.json
[WARNING|configuration_utils.py:617] 2024-01-08 18:44:38,532 >> You are using a model of type whisper to instantiate a model of type whisper_for_slu. This is not supported for all configurations of models and can yield errors.
[INFO|configuration_utils.py:802] 2024-01-08 18:44:38,535 >> Model config WhisperSLUConfig {
"_name_or_path": "openai/whisper-small",
"activation_dropout": 0.0,
"activation_function": "gelu",
"adaptor_activation": "relu",
"adaptor_init": "constant",
"adaptor_layernorm": true,
"apply_spec_augment": false,
"architectures": [
"WhisperForConditionalGeneration"
],
"attention_dropout": 0.0,
"begin_suppress_tokens": [
220,
50257
],
"bos_token_id": 50257,
"classifier_proj_size": 256,
"crf_transition_matrix": null,
"d_model": 768,
"decoder_attention_heads": 12,
"decoder_ffn_dim": 3072,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": 50258,
"dropout": 0.0,
"encoder_attention_heads": 12,
"encoder_ffn_dim": 3072,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 50257,
"forced_decoder_ids": [
[
1,
50259
],
[
2,
50359
],
[
3,
50363
]
],
"init_std": 0.02,
"is_encoder_decoder": true,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"max_length": 448,
"max_source_positions": 1500,
"max_target_positions": 448,
"median_filter_width": 7,
"model_type": "whisper_for_slu",
"num_hidden_layers": 12,
"num_mel_bins": 80,
"pad_token_id": 50257,
"scale_embedding": false,
"slu_attention_heads": 12,
"slu_dropout": 0.3,
"slu_embed_dim": 768,
"slu_ffn_dim": 2048,
"slu_focus": 1.0,
"slu_input_from": "decoder",
"slu_input_layers": [
11
],
"slu_labels": null,
"slu_layers": 2,
"slu_max_positions": null,
"slu_output_dim": 37,
"slu_pad_token_id": 1,
"slu_start_token_id": 36,
"slu_task": "named_entity_recognition",
"slu_weight": 0.2,
"suppress_tokens": [
1,
2,
7,
8,
9,
10,
14,
25,
26,
27,
28,
29,
31,
58,
59,
60,
61,
62,
63,
90,
91,
92,
93,
359,
503,
522,
542,
873,
893,
902,
918,
922,
931,
1350,
1853,
1982,
2460,
2627,
3246,
3253,
3268,
3536,
3846,
3961,
4183,
4667,
6585,
6647,
7273,
9061,
9383,
10428,
10929,
11938,
12033,
12331,
12562,
13793,
14157,
14635,
15265,
15618,
16553,
16604,
18362,
18956,
20075,
21675,
22520,
26130,
26161,
26435,
28279,
29464,
31650,
32302,
32470,
36865,
42863,
47425,
49870,
50254,
50258,
50360,
50361,
50362
],
"task": "token_classification",
"teacher": null,
"torch_dtype": "float32",
"transformers_version": "4.37.0.dev0",
"use_cache": true,
"use_crf": false,
"use_weighted_layer_sum": false,
"vocab_size": 51865
}
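The earlier warning about instantiating a "whisper" model as "whisper_for_slu" is expected here: the config above is Whisper's with extra slu_* fields for the NER head. A hypothetical sketch of such a subclass, with field names and defaults copied from the dump (the repository's actual WhisperSLUConfig may differ):

    from transformers import WhisperConfig

    class WhisperSLUConfig(WhisperConfig):
        model_type = "whisper_for_slu"

        def __init__(self, slu_task="named_entity_recognition", slu_layers=2,
                     slu_embed_dim=768, slu_output_dim=37, slu_weight=0.2,
                     slu_input_from="decoder", slu_input_layers=(11,), **kwargs):
            super().__init__(**kwargs)
            self.slu_task = slu_task                        # auxiliary task name
            self.slu_layers = slu_layers                    # NER head depth
            self.slu_embed_dim = slu_embed_dim
            self.slu_output_dim = slu_output_dim            # 37 entity labels
            self.slu_weight = slu_weight                    # loss mixing weight
            self.slu_input_from = slu_input_from            # tap decoder states
            self.slu_input_layers = list(slu_input_layers)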
/users/spraak/qmeeus/micromamba/envs/torch-cu121/lib/python3.10/site-packages/transformers/models/auto/feature_extraction_auto.py:328: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
[INFO|feature_extraction_utils.py:535] 2024-01-08 18:44:38,556 >> loading configuration file /esat/audioslave/qmeeus/exp/whisper_slu/train/whisper-small-spoken-ner/preprocessor_config.json
[INFO|feature_extraction_utils.py:579] 2024-01-08 18:44:38,563 >> Feature extractor WhisperFeatureExtractor {
"chunk_length": 30,
"feature_extractor_type": "WhisperFeatureExtractor",
"feature_size": 80,
"hop_length": 160,
"n_fft": 400,
"n_samples": 480000,
"nb_max_frames": 3000,
"padding_side": "right",
"padding_value": 0.0,
"processor_class": "WhisperProcessor",
"return_attention_mask": false,
"sampling_rate": 16000
}
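The derived fields are consistent with the base settings: n_samples = chunk_length × sampling_rate and nb_max_frames = n_samples / hop_length. A quick check against stock defaults, which match this dump:

    from transformers import WhisperFeatureExtractor

    fe = WhisperFeatureExtractor()                 # defaults match the config above
    assert fe.n_samples == 30 * 16_000             # 480,000 samples per 30 s chunk
    assert fe.nb_max_frames == 480_000 // 160      # 3,000 log-mel frames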
/users/spraak/qmeeus/micromamba/envs/torch-cu121/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:691: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,630 >> loading file vocab.json
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,630 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,630 >> loading file merges.txt
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,630 >> loading file normalizer.json
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,630 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,631 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2024] 2024-01-08 18:44:38,631 >> loading file tokenizer_config.json
[WARNING|logging.py:314] 2024-01-08 18:44:39,435 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
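This warning fires whenever the loaded tokenizer carries entries in added_tokens.json beyond the base vocabulary; the embedding rows for such tokens start untrained. A generic sketch of the pattern it refers to (the tag strings are placeholders, not this repository's actual additions):

    from transformers import WhisperForConditionalGeneration, WhisperTokenizerFast

    tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-small")
    model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

    # Hypothetical extra special tokens; the real set depends on the label scheme.
    tokenizer.add_special_tokens({"additional_special_tokens": ["<|PER|>", "<|LOC|>"]})
    model.resize_token_embeddings(len(tokenizer))  # new rows are randomly initialized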
/users/spraak/qmeeus/micromamba/envs/torch-cu121/lib/python3.10/site-packages/transformers/modeling_utils.py:2790: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
[INFO|modeling_utils.py:3373] 2024-01-08 18:44:39,454 >> loading weights file /esat/audioslave/qmeeus/exp/whisper_slu/train/whisper-small-spoken-ner/model.safetensors
[INFO|configuration_utils.py:826] 2024-01-08 18:44:41,796 >> Generate config GenerationConfig {
"begin_suppress_tokens": [
220,
50257
],
"bos_token_id": 50257,
"decoder_start_token_id": 50258,
"eos_token_id": 50257,
"forced_decoder_ids": [
[
1,
50259
],
[
2,
50359
],
[
3,
50363
]
],
"max_length": 448,
"pad_token_id": 50257
}
[INFO|modeling_utils.py:4227] 2024-01-08 18:44:42,780 >> All model checkpoint weights were used when initializing WhisperSLU.
[INFO|modeling_utils.py:4235] 2024-01-08 18:44:42,780 >> All the weights of WhisperSLU were initialized from the model checkpoint at /esat/audioslave/qmeeus/exp/whisper_slu/train/whisper-small-spoken-ner.
If your task is similar to the task the model of the checkpoint was trained on, you can already use WhisperSLU for predictions without further training.
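The messages above describe a warm start: every tensor in model.safetensors was mapped onto WhisperSLU with nothing left over, so the NER head comes from the earlier training run rather than random initialization. Sketched with the repository's custom class, assuming it exposes the usual from_pretrained:

    # WhisperSLU is this repository's custom model class, not part of Transformers.
    model = WhisperSLU.from_pretrained(
        "/esat/audioslave/qmeeus/exp/whisper_slu/train/whisper-small-spoken-ner",
        config=config,   # the WhisperSLUConfig loaded above
    )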
[INFO|configuration_utils.py:779] 2024-01-08 18:44:42,795 >> loading configuration file /esat/audioslave/qmeeus/exp/whisper_slu/train/whisper-small-spoken-ner/generation_config.json
[INFO|configuration_utils.py:826] 2024-01-08 18:44:42,796 >> Generate config GenerationConfig {
"alignment_heads": [
[
5,
3
],
[
5,
9
],
[
8,
0
],
[
8,
4
],
[
8,
7
],
[
8,
8
],
[
9,
0
],
[
9,
7
],
[
9,
9
],
[
10,
5
]
],
"begin_suppress_tokens": [
220,
50257
],
"bos_token_id": 50257,
"decoder_start_token_id": 50258,
"eos_token_id": 50257,
"forced_decoder_ids": [
[
1,
null
],
[
2,
50359
]
],
"is_multilingual": true,
"lang_to_id": {
"<|af|>": 50327,
"<|am|>": 50334,
"<|ar|>": 50272,
"<|as|>": 50350,
"<|az|>": 50304,
"<|ba|>": 50355,
"<|be|>": 50330,
"<|bg|>": 50292,
"<|bn|>": 50302,
"<|bo|>": 50347,
"<|br|>": 50309,
"<|bs|>": 50315,
"<|ca|>": 50270,
"<|cs|>": 50283,
"<|cy|>": 50297,
"<|da|>": 50285,
"<|de|>": 50261,
"<|el|>": 50281,
"<|en|>": 50259,
"<|es|>": 50262,
"<|et|>": 50307,
"<|eu|>": 50310,
"<|fa|>": 50300,
"<|fi|>": 50277,
"<|fo|>": 50338,
"<|fr|>": 50265,
"<|gl|>": 50319,
"<|gu|>": 50333,
"<|haw|>": 50352,
"<|ha|>": 50354,
"<|he|>": 50279,
"<|hi|>": 50276,
"<|hr|>": 50291,
"<|ht|>": 50339,
"<|hu|>": 50286,
"<|hy|>": 50312,
"<|id|>": 50275,
"<|is|>": 50311,
"<|it|>": 50274,
"<|ja|>": 50266,
"<|jw|>": 50356,
"<|ka|>": 50329,
"<|kk|>": 50316,
"<|km|>": 50323,
"<|kn|>": 50306,
"<|ko|>": 50264,
"<|la|>": 50294,
"<|lb|>": 50345,
"<|ln|>": 50353,
"<|lo|>": 50336,
"<|lt|>": 50293,
"<|lv|>": 50301,
"<|mg|>": 50349,
"<|mi|>": 50295,
"<|mk|>": 50308,
"<|ml|>": 50296,
"<|mn|>": 50314,
"<|mr|>": 50320,
"<|ms|>": 50282,
"<|mt|>": 50343,
"<|my|>": 50346,
"<|ne|>": 50313,
"<|nl|>": 50271,
"<|nn|>": 50342,
"<|no|>": 50288,
"<|oc|>": 50328,
"<|pa|>": 50321,
"<|pl|>": 50269,
"<|ps|>": 50340,
"<|pt|>": 50267,
"<|ro|>": 50284,
"<|ru|>": 50263,
"<|sa|>": 50344,
"<|sd|>": 50332,
"<|si|>": 50322,
"<|sk|>": 50298,
"<|sl|>": 50305,
"<|sn|>": 50324,
"<|so|>": 50326,
"<|sq|>": 50317,
"<|sr|>": 50303,
"<|su|>": 50357,
"<|sv|>": 50273,
"<|sw|>": 50318,
"<|ta|>": 50287,
"<|te|>": 50299,
"<|tg|>": 50331,
"<|th|>": 50289,
"<|tk|>": 50341,
"<|tl|>": 50348,
"<|tr|>": 50268,
"<|tt|>": 50351,
"<|uk|>": 50280,
"<|ur|>": 50290,
"<|uz|>": 50337,
"<|vi|>": 50278,
"<|yi|>": 50335,
"<|yo|>": 50325,
"<|zh|>": 50260
},
"max_initial_timestamp_index": 1,
"max_length": 448,
"no_timestamps_token_id": 50363,
"pad_token_id": 50257,
"return_timestamps": false,
"suppress_tokens": [
1,
2,
7,
8,
9,
10,
14,
25,
26,
27,
28,
29,
31,
58,
59,
60,
61,
62,
63,
90,
91,
92,
93,
359,
503,
522,
542,
873,
893,
902,
918,
922,
931,
1350,
1853,
1982,
2460,
2627,
3246,
3253,
3268,
3536,
3846,
3961,
4183,
4667,
6585,
6647,
7273,
9061,
9383,
10428,
10929,
11938,
12033,
12331,
12562,
13793,
14157,
14635,
15265,
15618,
16553,
16604,
18362,
18956,
20075,
21675,
22520,
26130,
26161,
26435,
28279,
29464,
31650,
32302,
32470,
36865,
42863,
47425,
49870,
50254,
50258,
50358,
50359,
50360,
50361,
50362
],
"task_to_id": {
"transcribe": 50359,
"translate": 50358
}
}
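In this generation config the language slot of forced_decoder_ids is null, so generate() auto-detects the language from lang_to_id; pinning it instead takes one processor call. A sketch using ids from the map above:

    from transformers import WhisperProcessor

    processor = WhisperProcessor.from_pretrained("openai/whisper-small")
    # Force Dutch transcription instead of auto-detection:
    forced = processor.get_decoder_prompt_ids(language="dutch", task="transcribe")
    # -> [(1, 50271), (2, 50359), (3, 50363)]: <|nl|>, <|transcribe|>, <|notimestamps|>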
trainable params: 2,111,784 || all params: 255,250,145 || trainable%: 0.8273390011198622
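The one-liner above is PEFT's print_trainable_parameters(): LoRA leaves about 0.83% of the 255M parameters trainable. A minimal sketch of the wrapping; the rank, alpha, and target modules are assumptions, since the log does not record them:

    from peft import LoraConfig, get_peft_model

    lora_config = LoraConfig(
        r=32,                                  # assumed rank (not logged)
        lora_alpha=64,                         # assumed scaling (not logged)
        target_modules=["q_proj", "v_proj"],   # assumed; a common choice for Whisper
        lora_dropout=0.05,
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()
    # trainable params: 2,111,784 || all params: 255,250,145 || trainable%: 0.827...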
[INFO|feature_extraction_utils.py:425] 2024-01-08 18:44:47,327 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/preprocessor_config.json
[INFO|tokenization_utils_base.py:2432] 2024-01-08 18:44:47,357 >> tokenizer config file saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tokenizer_config.json
[INFO|tokenization_utils_base.py:2441] 2024-01-08 18:44:47,358 >> Special tokens file saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/special_tokens_map.json
[INFO|configuration_utils.py:483] 2024-01-08 18:44:47,419 >> Configuration saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/config.json
[INFO|trainer.py:522] 2024-01-08 18:44:50,691 >> max_steps is given, it will override any value given in num_train_epochs
[INFO|trainer.py:571] 2024-01-08 18:44:50,691 >> Using auto half precision backend
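These two messages pin down the schedule: max_steps silently wins over num_train_epochs, and fp16 runs through the trainer's automatic half-precision backend. A sketch of arguments matching the values logged in this run (anything not shown is an assumed default):

    from transformers import Seq2SeqTrainingArguments

    training_args = Seq2SeqTrainingArguments(
        output_dir="/esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora",
        max_steps=5_000,                   # overrides num_train_epochs
        per_device_train_batch_size=4,
        gradient_accumulation_steps=32,    # 4 x 32 x 1 GPU = effective batch of 128
        fp16=True,                         # "16-bit training: True" at the top
        evaluation_strategy="steps",
        eval_steps=200,                    # an eval precedes each checkpoint below
        save_steps=200,                    # tmp-checkpoint-200, -400, ... -5000
        report_to=["wandb"],
    )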
wandb: Currently logged in as: qmeeus. Use `wandb login --relogin` to force relogin
wandb: wandb version 0.16.1 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.15.12
wandb: Run data is saved locally in /usr/data/condor/execute/dir_314523/whisper_slu/wandb/run-20240108_184452-35ireexg
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run run-2024-01-08_18-44-50
wandb: ⭐️ View project at https://wandb.ai/qmeeus/Whisper%20PEFT%20Fine-Tuning
wandb: πŸš€ View run at https://wandb.ai/qmeeus/Whisper%20PEFT%20Fine-Tuning/runs/35ireexg
[INFO|trainer.py:718] 2024-01-08 18:44:53,398 >> The following columns in the training set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:1712] 2024-01-08 18:44:53,456 >> ***** Running training *****
[INFO|trainer.py:1713] 2024-01-08 18:44:53,456 >> Num examples = 71,615
[INFO|trainer.py:1714] 2024-01-08 18:44:53,456 >> Num Epochs = 9
[INFO|trainer.py:1715] 2024-01-08 18:44:53,456 >> Instantaneous batch size per device = 4
[INFO|trainer.py:1718] 2024-01-08 18:44:53,456 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:1719] 2024-01-08 18:44:53,456 >> Gradient Accumulation steps = 32
[INFO|trainer.py:1720] 2024-01-08 18:44:53,456 >> Total optimization steps = 5,000
[INFO|trainer.py:1721] 2024-01-08 18:44:53,459 >> Number of trainable parameters = 2,111,784
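The header is internally consistent: the effective batch is 4 per device × 32 accumulation steps × 1 GPU = 128, and 5,000 optimization steps at 128 samples each over 71,615 examples give about 8.94 passes, which the trainer reports rounded up as "Num Epochs = 9" (wandb later logs train/epoch 8.94):

    per_device, accum, n_gpu = 4, 32, 1
    total_batch = per_device * accum * n_gpu   # 128
    epochs = 5_000 * total_batch / 71_615      # ~8.94 -> "Num Epochs = 9"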
[INFO|integration_utils.py:722] 2024-01-08 18:44:53,462 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
[WARNING|logging.py:314] 2024-01-08 18:44:53,481 >> You're using a WhisperTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
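The tokenizer warning points at a padding anti-pattern: encoding strings one by one and padding afterwards. With a fast tokenizer, a single __call__ does both; a sketch with placeholder strings:

    texts = ["example sentence one", "another sentence"]   # placeholders

    # Slower pattern the warning refers to:
    # batch = tokenizer.pad([tokenizer(t) for t in texts], return_tensors="pt")

    # Preferred: tokenize and pad in one call.
    batch = tokenizer(texts, padding=True, return_tensors="pt")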
[INFO|trainer.py:718] 2024-01-08 19:13:47,119 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 19:19:31,366 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-200
[INFO|feature_extraction_utils.py:425] 2024-01-08 19:19:31,494 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-200/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 19:48:40,699 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 19:54:22,629 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-400
[INFO|feature_extraction_utils.py:425] 2024-01-08 19:54:22,697 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-400/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 20:23:52,708 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 20:29:34,863 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-600
[INFO|feature_extraction_utils.py:425] 2024-01-08 20:29:34,923 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-600/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 20:57:32,183 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 21:03:16,687 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-800
[INFO|feature_extraction_utils.py:425] 2024-01-08 21:03:16,748 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-800/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 21:31:12,469 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 21:36:50,658 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1000
[INFO|feature_extraction_utils.py:425] 2024-01-08 21:36:50,723 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1000/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 22:04:44,620 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 22:10:25,435 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1200
[INFO|feature_extraction_utils.py:425] 2024-01-08 22:10:25,503 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1200/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 22:38:14,532 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 22:43:55,646 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1400
[INFO|feature_extraction_utils.py:425] 2024-01-08 22:43:55,713 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1400/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 23:11:49,094 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 23:17:29,789 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1600
[INFO|feature_extraction_utils.py:425] 2024-01-08 23:17:29,855 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1600/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-08 23:45:21,350 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-08 23:50:59,797 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1800
[INFO|feature_extraction_utils.py:425] 2024-01-08 23:50:59,864 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-1800/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 00:18:55,674 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 00:24:38,854 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2000
[INFO|feature_extraction_utils.py:425] 2024-01-09 00:24:38,925 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2000/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 00:52:30,504 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 00:58:08,825 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2200
[INFO|feature_extraction_utils.py:425] 2024-01-09 00:58:08,891 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2200/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 01:26:03,365 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 01:31:41,568 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2400
[INFO|feature_extraction_utils.py:425] 2024-01-09 01:31:41,637 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2400/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 01:59:37,802 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 02:05:15,416 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2600
[INFO|feature_extraction_utils.py:425] 2024-01-09 02:05:15,487 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2600/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 02:33:13,316 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 02:38:52,241 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2800
[INFO|feature_extraction_utils.py:425] 2024-01-09 02:38:52,309 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-2800/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 03:06:54,838 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 03:12:32,446 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3000
[INFO|feature_extraction_utils.py:425] 2024-01-09 03:12:32,518 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3000/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 03:40:35,202 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 03:46:14,094 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3200
[INFO|feature_extraction_utils.py:425] 2024-01-09 03:46:14,164 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3200/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 04:14:09,998 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 04:19:47,911 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3400
[INFO|feature_extraction_utils.py:425] 2024-01-09 04:19:47,978 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3400/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 04:47:50,188 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 04:53:29,921 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3600
[INFO|feature_extraction_utils.py:425] 2024-01-09 04:53:29,988 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3600/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 05:21:33,159 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 05:27:11,558 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3800
[INFO|feature_extraction_utils.py:425] 2024-01-09 05:27:11,628 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-3800/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 05:55:12,769 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 06:00:50,862 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4000
[INFO|feature_extraction_utils.py:425] 2024-01-09 06:00:50,923 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4000/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 06:28:50,219 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 06:34:27,483 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4200
[INFO|feature_extraction_utils.py:425] 2024-01-09 06:34:27,548 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4200/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 07:02:24,846 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 07:08:04,451 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4400
[INFO|feature_extraction_utils.py:425] 2024-01-09 07:08:04,518 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4400/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 07:36:02,929 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 07:41:42,554 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4600
[INFO|feature_extraction_utils.py:425] 2024-01-09 07:41:42,623 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4600/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 08:09:40,000 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 08:15:17,334 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4800
[INFO|feature_extraction_utils.py:425] 2024-01-09 08:15:17,402 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-4800/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 08:43:18,208 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
[INFO|trainer.py:2895] 2024-01-09 08:48:55,880 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-5000
[INFO|feature_extraction_utils.py:425] 2024-01-09 08:48:55,951 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/tmp-checkpoint-5000/preprocessor_config.json
[INFO|trainer.py:1953] 2024-01-09 08:48:56,055 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:2895] 2024-01-09 08:48:56,060 >> Saving model checkpoint to /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora
[INFO|feature_extraction_utils.py:425] 2024-01-09 08:48:56,146 >> Feature extractor saved in /esat/audioslave/qmeeus/exp/whisper_slu/pipeline/whisper-small-spoken-ner-lora/preprocessor_config.json
[INFO|trainer.py:718] 2024-01-09 08:48:56,152 >> The following columns in the evaluation set don't have a corresponding argument in `PeftModel.forward` and have been ignored: input_length. If input_length are not expected by `PeftModel.forward`, you can safely ignore this message.
wandb: Waiting for W&B process to finish... (success).
wandb:
wandb: Run history:
wandb: eval/f1_score β–β–β–„β–…β–„β–…β–…β–…β–…β–†β–†β–†β–‡β–‡β–‡β–‡β–†β–ˆβ–ˆβ–‡β–‡β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
wandb: eval/label_f1 β–†β–†β–„β–ƒβ–β–…β–„β–‚β–„β–ˆβ–†β–…β–†β–…β–†β–„β–„β–…β–†β–†β–†β–†β–ƒβ–„β–„β–„
wandb: eval/loss β–ˆβ–…β–„β–ƒβ–ƒβ–‚β–‚β–‚β–‚β–‚β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–
wandb: eval/runtime β–ˆβ–†β–†β–ˆβ–‚β–„β–…β–„β–‚β–‡β–‚β–‚β–β–ƒβ–β–ƒβ–‚β–ƒβ–‚β–‚β–β–ƒβ–ƒβ–β–β–ƒ
wandb: eval/samples_per_second β–β–ƒβ–ƒβ–β–‡β–…β–„β–…β–‡β–‚β–‡β–‡β–ˆβ–†β–ˆβ–†β–‡β–†β–‡β–‡β–ˆβ–†β–†β–ˆβ–ˆβ–†
wandb: eval/steps_per_second β–β–„β–ƒβ–β–‡β–…β–„β–…β–†β–‚β–†β–‡β–‡β–†β–‡β–†β–‡β–…β–†β–‡β–ˆβ–…β–…β–ˆβ–‡β–…
wandb: eval/wer β–ˆβ–…β–†β–…β–‚β–‚β–‚β–ƒβ–β–ƒβ–β–β–β–ƒβ–ƒβ–‚β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–ƒ
wandb: train/epoch β–β–β–β–β–‚β–‚β–‚β–‚β–‚β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–„β–„β–„β–„β–„β–…β–…β–…β–…β–…β–…β–†β–†β–†β–†β–†β–‡β–‡β–‡β–‡β–‡β–‡β–ˆβ–ˆβ–ˆβ–ˆ
wandb: train/global_step β–β–β–β–β–‚β–‚β–‚β–‚β–‚β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–„β–„β–„β–„β–„β–…β–…β–…β–…β–…β–…β–†β–†β–†β–†β–†β–‡β–‡β–‡β–‡β–‡β–‡β–ˆβ–ˆβ–ˆβ–ˆ
wandb: train/learning_rate β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‡β–‡β–‡β–‡β–‡β–†β–†β–†β–…β–…β–…β–…β–„β–„β–„β–„β–ƒβ–ƒβ–ƒβ–ƒβ–‚β–‚β–‚β–‚β–‚β–β–β–β–β–β–β–
wandb: train/loss β–ˆβ–ƒβ–ƒβ–ƒβ–‚β–‚β–‚β–‚β–‚β–‚β–‚β–‚β–‚β–‚β–β–‚β–‚β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: eval/f1_score 0.6872
wandb: eval/label_f1 0.83254
wandb: eval/loss 0.22641
wandb: eval/runtime 339.3736
wandb: eval/samples_per_second 2.947
wandb: eval/steps_per_second 0.368
wandb: eval/wer 0.098
wandb: train/epoch 8.94
wandb: train/global_step 5000
wandb: train/learning_rate 0.0
wandb: train/loss 0.1961
wandb: train/total_flos 1.9683074514013055e+20
wandb: train/train_loss 0.21677
wandb: train/train_runtime 50642.5955
wandb: train/train_samples_per_second 12.638
wandb: train/train_steps_per_second 0.099
wandb:
wandb: πŸš€ View run run-2024-01-08_18-44-50 at: https://wandb.ai/qmeeus/Whisper%20PEFT%20Fine-Tuning/runs/35ireexg
wandb: ⚑ View job at https://wandb.ai/qmeeus/Whisper%20PEFT%20Fine-Tuning/jobs/QXJ0aWZhY3RDb2xsZWN0aW9uOjEyODM1Nzc0OA==/version_details/v2
wandb: Synced 5 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20240108_184452-35ireexg/logs
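The closing summary checks out against the training header:

    steps, runtime, batch = 5_000, 50_642.5955, 128
    print(steps / runtime)           # ~0.0987 steps/s (logged: 0.099)
    print(steps * batch / runtime)   # ~12.64 samples/s (logged: 12.638)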