[INFO|2025-02-10 15:14:00] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/config.json
[INFO|2025-02-10 15:14:00] configuration_utils.py:762 >> Model config Qwen2VLConfig {
"_name_or_path": "Qwen/Qwen2-VL-2B-Instruct-AWQ",
"architectures": [
"Qwen2VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2_vl",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": [
"visual"
],
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "float16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1536,
"in_chans": 3,
"model_type": "qwen2_vl",
"spatial_patch_size": 14
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 151936
}
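
The config dump above fully determines how the checkpoint is loaded. Below is a minimal sketch of the equivalent load in user code, assuming transformers 4.47.1 with autoawq and accelerate available; the training framework performs this internally, and nothing here beyond the model id and dtype comes from the log:

```python
import torch
from transformers import AutoConfig, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-2B-Instruct-AWQ"

# Inspect the quantization settings without downloading the weights.
config = AutoConfig.from_pretrained(model_id)
print(config.quantization_config)  # bits: 4, quant_method: awq, ...

# Load the 4-bit AWQ checkpoint; per "modules_to_not_convert": ["visual"],
# the vision tower stays unquantized in float16.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches "torch_dtype": "float16" above
    device_map="auto",
)
```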
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file vocab.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/vocab.json
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file merges.txt from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/merges.txt
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file tokenizer.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/tokenizer.json
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file added_tokens.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/added_tokens.json
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file special_tokens_map.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/special_tokens_map.json
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file tokenizer_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/tokenizer_config.json
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2030 >> loading file chat_template.jinja from cache at None
[INFO|2025-02-10 15:14:01] tokenization_utils_base.py:2300 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2025-02-10 15:14:02] image_processing_base.py:378 >> loading configuration file preprocessor_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/preprocessor_config.json
[INFO|2025-02-10 15:14:02] image_processing_base.py:378 >> loading configuration file preprocessor_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/preprocessor_config.json
[INFO|2025-02-10 15:14:02] image_processing_base.py:432 >> Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 1003520,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"max_pixels": 1003520,
"min_pixels": 3136
},
"temporal_patch_size": 2,
"vision_token_id": 151654
}
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file vocab.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/vocab.json
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file merges.txt from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/merges.txt
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file tokenizer.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/tokenizer.json
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file added_tokens.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/added_tokens.json
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file special_tokens_map.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/special_tokens_map.json
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file tokenizer_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/tokenizer_config.json
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2030 >> loading file chat_template.jinja from cache at None
[INFO|2025-02-10 15:14:02] tokenization_utils_base.py:2300 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2025-02-10 15:14:03] processing_utils.py:780 >> Processor Qwen2VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 1003520,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"max_pixels": 1003520,
"min_pixels": 3136
},
"temporal_patch_size": 2,
"vision_token_id": 151654
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2-VL-2B-Instruct-AWQ', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
)
{
"processor_class": "Qwen2VLProcessor"
}
[INFO|2025-02-10 15:14:03] logging.py:157 >> Add <|im_end|> to stop words.
[INFO|2025-02-10 15:14:03] logging.py:157 >> Loading dataset qwen2_vl_dora.json...
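
The log only names the dataset file, not its contents. For orientation, a record in the sharegpt-style multimodal format that this trainer appears to use typically looks like the sketch below; every field value is a hypothetical placeholder, not the actual contents of qwen2_vl_dora.json:

```python
# Hypothetical record shape; the real schema of qwen2_vl_dora.json is
# not visible in the log, so names and content here are assumptions.
example_record = {
    "messages": [
        {"role": "user", "content": "<image>What does the sign say?"},
        {"role": "assistant", "content": "It says 'Departures'."},
    ],
    "images": ["data/images/sign_001.jpg"],
}
```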
[INFO|2025-02-10 15:14:17] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/config.json
[INFO|2025-02-10 15:14:17] configuration_utils.py:762 >> Model config Qwen2VLConfig {
"_name_or_path": "Qwen/Qwen2-VL-2B-Instruct-AWQ",
"architectures": [
"Qwen2VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2_vl",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": [
"visual"
],
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "float16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1536,
"in_chans": 3,
"model_type": "qwen2_vl",
"spatial_patch_size": 14
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 151936
}
[INFO|2025-02-10 15:14:17] logging.py:157 >> Loading 4-bit AWQ-quantized model.
[INFO|2025-02-10 15:16:00] modeling_utils.py:3953 >> loading weights file model.safetensors from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/model.safetensors
[INFO|2025-02-10 15:16:00] modeling_utils.py:1641 >> Instantiating Qwen2VLForConditionalGeneration model under default dtype torch.float16.
[INFO|2025-02-10 15:16:00] configuration_utils.py:1140 >> Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
[INFO|2025-02-10 15:16:00] modeling_utils.py:1641 >> Instantiating Qwen2VisionTransformerPretrainedModel model under default dtype torch.float16.
[WARNING|2025-02-10 15:16:00] logging.py:328 >> `Qwen2VLRotaryEmbedding` can now be fully parameterized by passing the model config through the `config` argument. All other arguments will be removed in v4.46
[INFO|2025-02-10 15:16:02] modeling_utils.py:4849 >> All model checkpoint weights were used when initializing Qwen2VLForConditionalGeneration.
[INFO|2025-02-10 15:16:02] modeling_utils.py:4857 >> All the weights of Qwen2VLForConditionalGeneration were initialized from the model checkpoint at Qwen/Qwen2-VL-2B-Instruct-AWQ.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2VLForConditionalGeneration for predictions without further training.
[INFO|2025-02-10 15:16:02] configuration_utils.py:1095 >> loading configuration file generation_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/generation_config.json
[INFO|2025-02-10 15:16:02] configuration_utils.py:1140 >> Generate config GenerationConfig {
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"temperature": 0.01,
"top_k": 1,
"top_p": 0.001
}
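
Note that top_k = 1 restricts sampling to the single most probable token, so despite do_sample = true this generation config is effectively greedy decoding; temperature 0.01 and top_p 0.001 pin it down further. A sketch of applying it, reusing the model and processor loaded above (max_new_tokens is illustrative, not from the log):

```python
# Text-only generation with the logged sampling settings.
inputs = processor(text=[prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.01,
    top_k=1,       # argmax only -> effectively greedy
    top_p=0.001,
    max_new_tokens=64,  # illustrative; not recorded in the log
)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```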
[INFO|2025-02-10 15:16:02] logging.py:157 >> Gradient checkpointing enabled.
[INFO|2025-02-10 15:16:02] logging.py:157 >> Casting multimodal projector outputs in torch.float16.
[INFO|2025-02-10 15:16:02] logging.py:157 >> Using torch SDPA for faster training and inference.
[INFO|2025-02-10 15:16:02] logging.py:157 >> Upcasting trainable params to float32.
[INFO|2025-02-10 15:16:02] logging.py:157 >> Fine-tuning method: LoRA
[INFO|2025-02-10 15:16:02] logging.py:157 >> Found linear modules: q_proj,v_proj,k_proj,o_proj,up_proj,down_proj,gate_proj
[INFO|2025-02-10 15:16:02] logging.py:157 >> Set vision model not trainable: ['visual.patch_embed', 'visual.blocks'].
[INFO|2025-02-10 15:16:03] logging.py:157 >> trainable params: 9,232,384 || all params: 907,964,928 || trainable%: 1.0168
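
The trainable-parameter count pins down the LoRA rank: each rank contributes 41,216 adapter parameters per layer across the seven target projections (q/o: 2 × 3,072; k/v: 2 × 1,792; gate/up/down: 3 × 10,496), so 41,216 × 28 layers × r = 9,232,384 gives r = 8. A sketch of an equivalent PEFT configuration follows; lora_alpha and dropout are not recorded in the log and are placeholders:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                # inferred from the trainable-parameter count
    lora_alpha=16,      # assumption: not recorded in the log
    lora_dropout=0.0,   # assumption: not recorded in the log
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "up_proj", "down_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
# -> trainable params: 9,232,384 (matching the log line above)
```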
[INFO|2025-02-10 15:16:03] trainer.py:734 >> Using auto half precision backend
[INFO|2025-02-10 15:16:03] trainer.py:2362 >> ***** Running training *****
[INFO|2025-02-10 15:16:03] trainer.py:2363 >> Num examples = 565
[INFO|2025-02-10 15:16:03] trainer.py:2364 >> Num Epochs = 3
[INFO|2025-02-10 15:16:03] trainer.py:2365 >> Instantaneous batch size per device = 2
[INFO|2025-02-10 15:16:03] trainer.py:2368 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|2025-02-10 15:16:03] trainer.py:2369 >> Gradient Accumulation steps = 8
[INFO|2025-02-10 15:16:03] trainer.py:2370 >> Total optimization steps = 105
[INFO|2025-02-10 15:16:03] trainer.py:2371 >> Number of trainable parameters = 9,232,384
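
The 105 optimization steps follow directly from the numbers above, assuming a single device (total batch 16 = 2 per device × 8 accumulation steps):

```python
num_examples = 565
per_device_batch = 2
grad_accum = 8
epochs = 3

micro_batches_per_epoch = -(-num_examples // per_device_batch)  # ceil -> 283
steps_per_epoch = micro_batches_per_epoch // grad_accum          # 283 // 8 = 35
total_steps = steps_per_epoch * epochs                           # 35 * 3 = 105
print(total_steps)  # 105
```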
[INFO|2025-02-10 15:18:30] logging.py:157 >> {'loss': 4.3219, 'learning_rate': 4.9721e-05, 'epoch': 0.14, 'throughput': 610.23}
[INFO|2025-02-10 15:20:41] logging.py:157 >> {'loss': 2.7795, 'learning_rate': 4.8889e-05, 'epoch': 0.28, 'throughput': 625.65}
[INFO|2025-02-10 15:22:49] logging.py:157 >> {'loss': 3.1788, 'learning_rate': 4.7839e-05, 'epoch': 0.42, 'throughput': 631.57}
[INFO|2025-02-10 15:25:25] logging.py:157 >> {'loss': 2.8887, 'learning_rate': 4.6068e-05, 'epoch': 0.57, 'throughput': 630.67}
[INFO|2025-02-10 15:27:48] logging.py:157 >> {'loss': 2.5454, 'learning_rate': 4.3827e-05, 'epoch': 0.71, 'throughput': 631.72}
[INFO|2025-02-10 15:30:11] logging.py:157 >> {'loss': 2.2010, 'learning_rate': 4.1165e-05, 'epoch': 0.85, 'throughput': 632.41}
[INFO|2025-02-10 15:32:29] logging.py:157 >> {'loss': 1.3345, 'learning_rate': 3.8142e-05, 'epoch': 0.99, 'throughput': 630.92}
[INFO|2025-02-10 15:34:37] logging.py:157 >> {'loss': 0.9507, 'learning_rate': 3.4826e-05, 'epoch': 1.11, 'throughput': 631.24}
[INFO|2025-02-10 15:37:03] logging.py:157 >> {'loss': 0.6257, 'learning_rate': 3.1290e-05, 'epoch': 1.25, 'throughput': 631.56}
[INFO|2025-02-10 15:39:22] logging.py:157 >> {'loss': 1.9766, 'learning_rate': 2.7613e-05, 'epoch': 1.40, 'throughput': 629.20}
[INFO|2025-02-10 15:39:22] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50
[INFO|2025-02-10 15:39:22] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/config.json
[INFO|2025-02-10 15:39:22] configuration_utils.py:762 >> Model config Qwen2VLConfig {
"_name_or_path": "Qwen/Qwen2-VL-2B-Instruct-319",
"architectures": [
"Qwen2VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2_vl",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": [
"visual"
],
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "float16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1536,
"in_chans": 3,
"model_type": "qwen2_vl",
"spatial_patch_size": 14
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 151936
}
[INFO|2025-02-10 15:39:22] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50/tokenizer_config.json
[INFO|2025-02-10 15:39:22] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50/special_tokens_map.json
[INFO|2025-02-10 15:39:23] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50/preprocessor_config.json
[INFO|2025-02-10 15:39:23] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50/tokenizer_config.json
[INFO|2025-02-10 15:39:23] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50/special_tokens_map.json
[INFO|2025-02-10 15:39:23] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-50/chat_template.json
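
With checkpoint-50 on disk, an interrupted run could plausibly be resumed through the standard HF Trainer mechanism; a sketch, assuming access to the framework's trainer object (the exact flag plumbing is framework-specific):

```python
# Resumes optimizer, scheduler, and RNG state from the saved checkpoint.
trainer.train(
    resume_from_checkpoint=(
        "saves/Qwen2-VL-2B-Instruct-AWQ/lora/"
        "250210_Abroad_LoRA_2B_AWQ/checkpoint-50"
    )
)
```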
[INFO|2025-02-10 15:41:43] logging.py:157 >> {'loss': 1.0377, 'learning_rate': 2.3878e-05, 'epoch': 1.54, 'throughput': 629.38}
[INFO|2025-02-10 15:44:11] logging.py:157 >> {'loss': 1.5486, 'learning_rate': 2.0169e-05, 'epoch': 1.68, 'throughput': 627.11}
[INFO|2025-02-10 15:46:23] logging.py:157 >> {'loss': 0.7542, 'learning_rate': 1.6567e-05, 'epoch': 1.82, 'throughput': 628.22}
[INFO|2025-02-10 15:48:46] logging.py:157 >> {'loss': 0.7561, 'learning_rate': 1.3153e-05, 'epoch': 1.96, 'throughput': 628.73}
[INFO|2025-02-10 15:50:33] logging.py:157 >> {'loss': 0.7300, 'learning_rate': 1.0004e-05, 'epoch': 2.08, 'throughput': 629.72}
[INFO|2025-02-10 15:52:41] logging.py:157 >> {'loss': 0.5124, 'learning_rate': 7.1906e-06, 'epoch': 2.23, 'throughput': 630.55}
[INFO|2025-02-10 15:55:04] logging.py:157 >> {'loss': 0.7801, 'learning_rate': 4.7746e-06, 'epoch': 2.37, 'throughput': 630.83}
[INFO|2025-02-10 15:57:38] logging.py:157 >> {'loss': 0.3849, 'learning_rate': 2.8104e-06, 'epoch': 2.51, 'throughput': 629.03}
[INFO|2025-02-10 15:59:56] logging.py:157 >> {'loss': 0.4539, 'learning_rate': 1.3418e-06, 'epoch': 2.65, 'throughput': 629.45}
[INFO|2025-02-10 16:02:20] logging.py:157 >> {'loss': 1.1474, 'learning_rate': 4.0176e-07, 'epoch': 2.79, 'throughput': 628.90}
[INFO|2025-02-10 16:02:20] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100
[INFO|2025-02-10 16:02:21] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/config.json
[INFO|2025-02-10 16:02:21] configuration_utils.py:762 >> Model config Qwen2VLConfig {
"_name_or_path": "Qwen/Qwen2-VL-2B-Instruct-319",
"architectures": [
"Qwen2VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2_vl",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": [
"visual"
],
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "float16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1536,
"in_chans": 3,
"model_type": "qwen2_vl",
"spatial_patch_size": 14
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 151936
}
[INFO|2025-02-10 16:02:21] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100/tokenizer_config.json
[INFO|2025-02-10 16:02:21] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100/special_tokens_map.json
[INFO|2025-02-10 16:02:21] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100/preprocessor_config.json
[INFO|2025-02-10 16:02:21] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100/tokenizer_config.json
[INFO|2025-02-10 16:02:21] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100/special_tokens_map.json
[INFO|2025-02-10 16:02:22] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-100/chat_template.json
[INFO|2025-02-10 16:04:55] logging.py:157 >> {'loss': 0.7139, 'learning_rate': 1.1189e-08, 'epoch': 2.93, 'throughput': 627.96}
[INFO|2025-02-10 16:04:55] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105
[INFO|2025-02-10 16:04:55] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/config.json
[INFO|2025-02-10 16:04:55] configuration_utils.py:762 >> Model config Qwen2VLConfig {
"_name_or_path": "Qwen/Qwen2-VL-2B-Instruct-319",
"architectures": [
"Qwen2VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2_vl",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": [
"visual"
],
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "float16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1536,
"in_chans": 3,
"model_type": "qwen2_vl",
"spatial_patch_size": 14
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 151936
}
[INFO|2025-02-10 16:04:55] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105/tokenizer_config.json
[INFO|2025-02-10 16:04:55] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105/special_tokens_map.json
[INFO|2025-02-10 16:04:56] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105/preprocessor_config.json
[INFO|2025-02-10 16:04:56] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105/tokenizer_config.json
[INFO|2025-02-10 16:04:56] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105/special_tokens_map.json
[INFO|2025-02-10 16:04:56] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/checkpoint-105/chat_template.json
[INFO|2025-02-10 16:04:56] trainer.py:2636 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|2025-02-10 16:04:56] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/preprocessor_config.json
[INFO|2025-02-10 16:04:56] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/tokenizer_config.json
[INFO|2025-02-10 16:04:56] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/special_tokens_map.json
[INFO|2025-02-10 16:04:57] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/chat_template.json
[INFO|2025-02-10 16:04:57] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ
[INFO|2025-02-10 16:04:57] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-2B-Instruct-AWQ/snapshots/4f6ea6d22fcf0f8c1ed64d1d2a3d722d4d7bbcea/config.json
[INFO|2025-02-10 16:04:57] configuration_utils.py:762 >> Model config Qwen2VLConfig {
"_name_or_path": "Qwen/Qwen2-VL-2B-Instruct-319",
"architectures": [
"Qwen2VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 1536,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 8960,
"max_position_embeddings": 32768,
"max_window_layers": 28,
"model_type": "qwen2_vl",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": [
"visual"
],
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": true,
"torch_dtype": "float16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1536,
"in_chans": 3,
"model_type": "qwen2_vl",
"spatial_patch_size": 14
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 151936
}
[INFO|2025-02-10 16:04:57] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/tokenizer_config.json
[INFO|2025-02-10 16:04:57] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ/special_tokens_map.json
[WARNING|2025-02-10 16:04:58] logging.py:162 >> No metric eval_loss to plot.
[WARNING|2025-02-10 16:04:58] logging.py:162 >> No metric eval_accuracy to plot.
[INFO|2025-02-10 16:04:58] modelcard.py:449 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
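
The two warnings above simply reflect that no evaluation set was configured, so only training loss is available for plotting. A sketch of loading the finished adapter for inference, assuming peft is installed; the adapter path is the final save directory from the log:

```python
import torch
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Reload the 4-bit AWQ base model, then attach the trained LoRA adapter.
base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct-AWQ",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "saves/Qwen2-VL-2B-Instruct-AWQ/lora/250210_Abroad_LoRA_2B_AWQ"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct-AWQ")
```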