[INFO|2025-02-10 14:52:09] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/config.json
[INFO|2025-02-10 14:52:09] configuration_utils.py:762 >> Model config Qwen2VLConfig {
  "_name_or_path": "Qwen/Qwen2-VL-7B-Instruct-AWQ",
  "architectures": ["Qwen2VLForConditionalGeneration"],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "image_token_id": 151655,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 32768,
  "max_window_layers": 28,
  "model_type": "qwen2_vl",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "modules_to_not_convert": ["visual"],
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "mrope_section": [16, 24, 24],
    "rope_type": "default",
    "type": "default"
  },
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.47.1",
  "use_cache": true,
  "use_sliding_window": false,
  "video_token_id": 151656,
  "vision_config": {
    "in_chans": 3,
    "model_type": "qwen2_vl",
    "spatial_patch_size": 14
  },
  "vision_end_token_id": 151653,
  "vision_start_token_id": 151652,
  "vision_token_id": 151654,
  "vocab_size": 152064
}
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file vocab.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/vocab.json
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file merges.txt from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/merges.txt
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file tokenizer.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/tokenizer.json
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file added_tokens.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/added_tokens.json
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file special_tokens_map.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/special_tokens_map.json
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file tokenizer_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/tokenizer_config.json
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2030 >> loading file chat_template.jinja from cache at None
[INFO|2025-02-10 14:52:10] tokenization_utils_base.py:2300 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
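The run begins by pulling the base model's config and tokenizer from the local Hugging Face cache. As a minimal standalone sketch (only the model ID comes from the log; everything else is illustrative), the same objects can be loaded with:

```python
# Minimal sketch: load the config and tokenizer the log shows being cached.
# Only the model ID is taken from the log; this standalone load is illustrative.
from transformers import AutoConfig, AutoTokenizer

model_id = "Qwen/Qwen2-VL-7B-Instruct-AWQ"

config = AutoConfig.from_pretrained(model_id)        # -> Qwen2VLConfig
tokenizer = AutoTokenizer.from_pretrained(model_id)  # -> Qwen2TokenizerFast

print(config.model_type)    # qwen2_vl
print(tokenizer.eos_token)  # <|im_end|>
```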
[INFO|2025-02-10 14:52:10] image_processing_base.py:378 >> loading configuration file preprocessor_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/preprocessor_config.json
[INFO|2025-02-10 14:52:11] image_processing_base.py:378 >> loading configuration file preprocessor_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/preprocessor_config.json
[INFO|2025-02-10 14:52:11] image_processing_base.py:432 >> Image processor Qwen2VLImageProcessor {
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [0.48145466, 0.4578275, 0.40821073],
  "image_processor_type": "Qwen2VLImageProcessor",
  "image_std": [0.26862954, 0.26130258, 0.27577711],
  "max_pixels": 12845056,
  "merge_size": 2,
  "min_pixels": 3136,
  "patch_size": 14,
  "processor_class": "Qwen2VLProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {"max_pixels": 12845056, "min_pixels": 3136},
  "temporal_patch_size": 2
}
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file vocab.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/vocab.json
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file merges.txt from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/merges.txt
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file tokenizer.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/tokenizer.json
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file added_tokens.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/added_tokens.json
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file special_tokens_map.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/special_tokens_map.json
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file tokenizer_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/tokenizer_config.json
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2030 >> loading file chat_template.jinja from cache at None
[INFO|2025-02-10 14:52:11] tokenization_utils_base.py:2300 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
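The preprocessor config fixes Qwen2-VL's dynamic-resolution bounds: with patch_size 14 and merge_size 2, every 28x28 pixel block maps to one visual token, and images are resized so their area stays between min_pixels (3136) and max_pixels (12845056). A sketch of loading the combined processor with the logged bounds:

```python
# Sketch: load the combined processor; the pixel bounds are the values from
# the logged preprocessor_config.json.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct-AWQ",
    min_pixels=3136,      # 4 visual tokens minimum  (4 * 28 * 28)
    max_pixels=12845056,  # 16384 visual tokens maximum (16384 * 28 * 28)
)
print(type(processor).__name__)  # Qwen2VLProcessor
```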
[INFO|2025-02-10 14:52:12] processing_utils.py:780 >> Processor Qwen2VLProcessor:
- image_processor: Qwen2VLImageProcessor { ... }
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2-VL-7B-Instruct-AWQ', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
    151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
  })
{ "processor_class": "Qwen2VLProcessor" }
[INFO|2025-02-10 14:52:12] logging.py:157 >> Add <|im_end|> to stop words.
[INFO|2025-02-10 14:52:12] logging.py:157 >> Loading dataset qwen2_vl_dora.json...
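The logging style (logging.py:157 messages, the saves/ directory layout below) matches LLaMA-Factory, so qwen2_vl_dora.json is presumably a custom dataset registered in its multimodal sharegpt schema. Assuming that schema, one record would look roughly like the following; the field names follow LLaMA-Factory's multimodal examples and the content is invented for illustration, since the actual dataset is not visible in the log:

```python
# Hypothetical record from qwen2_vl_dora.json, assuming LLaMA-Factory's
# sharegpt-style multimodal schema. The real dataset contents are not shown
# anywhere in this log.
example = {
    "messages": [
        {"role": "user", "content": "<image>Describe the document in this image."},
        {"role": "assistant", "content": "It is a scanned application form with ..."},
    ],
    "images": ["data/images/sample_0001.png"],  # one image path per <image> tag
}
```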
[INFO|2025-02-10 14:52:25] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/config.json
[INFO|2025-02-10 14:52:25] configuration_utils.py:762 >> Model config Qwen2VLConfig { ... }
[INFO|2025-02-10 14:52:25] logging.py:157 >> Loading 4-bit AWQ-quantized model.
[INFO|2025-02-10 14:52:26] modeling_utils.py:3953 >> loading weights file model.safetensors from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/model.safetensors.index.json
[INFO|2025-02-10 14:56:27] modeling_utils.py:1641 >> Instantiating Qwen2VLForConditionalGeneration model under default dtype torch.float16.
[INFO|2025-02-10 14:56:27] configuration_utils.py:1140 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151645
}
[INFO|2025-02-10 14:56:27] modeling_utils.py:1641 >> Instantiating Qwen2VisionTransformerPretrainedModel model under default dtype torch.float16.
[WARNING|2025-02-10 14:56:27] logging.py:328 >> `Qwen2VLRotaryEmbedding` can now be fully parameterized by passing the model config through the `config` argument. All other arguments will be removed in v4.46
[INFO|2025-02-10 14:56:30] modeling_utils.py:4849 >> All model checkpoint weights were used when initializing Qwen2VLForConditionalGeneration.
[INFO|2025-02-10 14:56:30] modeling_utils.py:4857 >> All the weights of Qwen2VLForConditionalGeneration were initialized from the model checkpoint at Qwen/Qwen2-VL-7B-Instruct-AWQ. If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2VLForConditionalGeneration for predictions without further training.
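Because the checkpoint carries a quantization_config with quant_method "awq", transformers dispatches the linear layers to 4-bit AWQ GEMM kernels, while the "visual" tower is excluded via modules_to_not_convert and stays in float16. A standalone load mirroring the logged steps might look like this (the autoawq package must be installed; device placement is an assumption, since the log does not show it):

```python
# Sketch: standalone load of the AWQ checkpoint, mirroring the logged steps.
# Requires the `autoawq` package for the 4-bit GEMM kernels.
import torch
from transformers import Qwen2VLForConditionalGeneration

base_model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct-AWQ",
    torch_dtype=torch.float16,  # matches "default dtype torch.float16" in the log
    device_map="auto",          # assumption; placement is not shown in the log
)
```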
[INFO|2025-02-10 14:56:31] configuration_utils.py:1095 >> loading configuration file generation_config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/generation_config.json
[INFO|2025-02-10 14:56:31] configuration_utils.py:1140 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [151645, 151643],
  "pad_token_id": 151643,
  "temperature": 0.01,
  "top_k": 1,
  "top_p": 0.001
}
[INFO|2025-02-10 14:56:31] logging.py:157 >> Gradient checkpointing enabled.
[INFO|2025-02-10 14:56:31] logging.py:157 >> Casting multimodal projector outputs in torch.float16.
[INFO|2025-02-10 14:56:31] logging.py:157 >> Using torch SDPA for faster training and inference.
[INFO|2025-02-10 14:56:31] logging.py:157 >> Upcasting trainable params to float32.
[INFO|2025-02-10 14:56:31] logging.py:157 >> Fine-tuning method: LoRA
[INFO|2025-02-10 14:56:31] logging.py:157 >> Found linear modules: k_proj,gate_proj,up_proj,q_proj,down_proj,v_proj,o_proj
[INFO|2025-02-10 14:56:31] logging.py:157 >> Set vision model not trainable: ['visual.patch_embed', 'visual.blocks'].
[INFO|2025-02-10 14:56:31] logging.py:157 >> trainable params: 20,185,088 || all params: 1,786,143,232 || trainable%: 1.1301
[INFO|2025-02-10 14:56:32] trainer.py:734 >> Using auto half precision backend
[INFO|2025-02-10 14:56:32] trainer.py:2362 >> ***** Running training *****
[INFO|2025-02-10 14:56:32] trainer.py:2363 >> Num examples = 565
[INFO|2025-02-10 14:56:32] trainer.py:2364 >> Num Epochs = 3
[INFO|2025-02-10 14:56:32] trainer.py:2365 >> Instantaneous batch size per device = 2
[INFO|2025-02-10 14:56:32] trainer.py:2368 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|2025-02-10 14:56:32] trainer.py:2369 >> Gradient Accumulation steps = 8
[INFO|2025-02-10 14:56:32] trainer.py:2370 >> Total optimization steps = 105
[INFO|2025-02-10 14:56:32] trainer.py:2371 >> Number of trainable parameters = 20,185,088
[INFO|2025-02-10 14:59:53] logging.py:157 >> {'loss': 4.4182, 'learning_rate': 4.9821e-05, 'epoch': 0.14, 'throughput': 444.43}
[INFO|2025-02-10 15:02:54] logging.py:157 >> {'loss': 2.2953, 'learning_rate': 4.9287e-05, 'epoch': 0.28, 'throughput': 454.06}
[INFO|2025-02-10 15:05:52] logging.py:157 >> {'loss': 2.6132, 'learning_rate': 4.8133e-05, 'epoch': 0.42, 'throughput': 457.68}
[INFO|2025-02-10 15:09:25] logging.py:157 >> {'loss': 2.6473, 'learning_rate': 4.6461e-05, 'epoch': 0.57, 'throughput': 457.95}
[INFO|2025-02-10 15:12:42] logging.py:157 >> {'loss': 1.9798, 'learning_rate': 4.4310e-05, 'epoch': 0.71, 'throughput': 458.77}
[INFO|2025-02-10 15:15:59] logging.py:157 >> {'loss': 1.6494, 'learning_rate': 4.1728e-05, 'epoch': 0.85, 'throughput': 459.28}
[INFO|2025-02-10 15:19:08] logging.py:157 >> {'loss': 1.2652, 'learning_rate': 3.8772e-05, 'epoch': 0.99, 'throughput': 458.59}
[INFO|2025-02-10 15:22:05] logging.py:157 >> {'loss': 1.1679, 'learning_rate': 3.5509e-05, 'epoch': 1.11, 'throughput': 458.57}
[INFO|2025-02-10 15:25:26] logging.py:157 >> {'loss': 0.7785, 'learning_rate': 3.2011e-05, 'epoch': 1.25, 'throughput': 458.89}
[INFO|2025-02-10 15:28:35] logging.py:157 >> {'loss': 1.3648, 'learning_rate': 2.8356e-05, 'epoch': 1.40, 'throughput': 457.64}
[INFO|2025-02-10 15:28:35] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50
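The counters in the training header are internally consistent: the effective batch size is 2 per device x 8 accumulation steps = 16, one epoch over 565 examples is floor(565 / 16) = 35 optimizer steps, and 3 epochs give the logged 105 total steps. The trainable-parameter count likewise pins down the LoRA rank: with hidden_size 3584, intermediate_size 18944, and a 512-wide KV projection (4 KV heads x head_dim 128), the seven target projections contribute 90,112 parameters per unit of rank per layer, so 90,112 x 28 layers x r = 20,185,088 gives r = 8. A PEFT sketch consistent with those numbers (lora_alpha and lora_dropout are assumptions; the log does not show them):

```python
# Sketch: a peft LoraConfig consistent with the logged module list and the
# rank inferred above. Alpha and dropout are assumptions.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,               # inferred: 2,523,136 params per rank * 8 = 20,185,088
    lora_alpha=16,     # assumption
    lora_dropout=0.0,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
# peft_model = get_peft_model(base_model, lora_config)
# peft_model.print_trainable_parameters()
# -> trainable params: 20,185,088 || ...
```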
[INFO|2025-02-10 15:28:35] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/config.json
[INFO|2025-02-10 15:28:35] configuration_utils.py:762 >> Model config Qwen2VLConfig { ... }
[INFO|2025-02-10 15:28:35] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50/tokenizer_config.json
[INFO|2025-02-10 15:28:35] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50/special_tokens_map.json
[INFO|2025-02-10 15:28:36] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50/preprocessor_config.json
[INFO|2025-02-10 15:28:36] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50/tokenizer_config.json
[INFO|2025-02-10 15:28:36] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50/special_tokens_map.json
[INFO|2025-02-10 15:28:36] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50/chat_template.json
[INFO|2025-02-10 15:31:50] logging.py:157 >> {'loss': 1.1121, 'learning_rate': 2.4626e-05, 'epoch': 1.54, 'throughput': 457.57}
[INFO|2025-02-10 15:35:12] logging.py:157 >> {'loss': 1.1847, 'learning_rate': 2.0905e-05, 'epoch': 1.68, 'throughput': 456.40}
[INFO|2025-02-10 15:38:13] logging.py:157 >> {'loss': 0.4018, 'learning_rate': 1.7275e-05, 'epoch': 1.82, 'throughput': 456.99}
[INFO|2025-02-10 15:41:30] logging.py:157 >> {'loss': 0.7509, 'learning_rate': 1.3817e-05, 'epoch': 1.96, 'throughput': 457.33}
[INFO|2025-02-10 15:43:59] logging.py:157 >> {'loss': 0.4600, 'learning_rate': 1.0610e-05, 'epoch': 2.08, 'throughput': 457.74}
[INFO|2025-02-10 15:46:57] logging.py:157 >> {'loss': 0.4216, 'learning_rate': 7.7234e-06, 'epoch': 2.23, 'throughput': 458.16}
[INFO|2025-02-10 15:50:14] logging.py:157 >> {'loss': 0.5816, 'learning_rate': 5.2232e-06, 'epoch': 2.37, 'throughput': 458.34}
[INFO|2025-02-10 15:53:44] logging.py:157 >> {'loss': 0.4955, 'learning_rate': 3.1648e-06, 'epoch': 2.51, 'throughput': 457.37}
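Each checkpoint directory holds the LoRA adapter weights plus the tokenizer and processor files listed above, so an intermediate adapter can be attached to the base model for a quick qualitative check mid-run. A sketch (the path comes from the log; base_model is the AWQ model from the earlier sketch):

```python
# Sketch: attach the checkpoint-50 adapter to the already-loaded AWQ base
# model for a mid-run smoke test. `base_model` is from the earlier sketch.
from peft import PeftModel

ckpt = "saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-50"
model = PeftModel.from_pretrained(base_model, ckpt)
model.eval()
```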
[INFO|2025-02-10 15:56:54] logging.py:157 >> {'loss': 0.2691, 'learning_rate': 1.5941e-06, 'epoch': 2.65, 'throughput': 457.51}
[INFO|2025-02-10 16:00:12] logging.py:157 >> {'loss': 1.1078, 'learning_rate': 5.4631e-07, 'epoch': 2.79, 'throughput': 457.20}
[INFO|2025-02-10 16:00:12] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100
[INFO|2025-02-10 16:00:12] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/config.json
[INFO|2025-02-10 16:00:12] configuration_utils.py:762 >> Model config Qwen2VLConfig { ... }
[INFO|2025-02-10 16:00:13] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100/tokenizer_config.json
[INFO|2025-02-10 16:00:13] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100/special_tokens_map.json
[INFO|2025-02-10 16:00:13] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100/preprocessor_config.json
[INFO|2025-02-10 16:00:13] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100/tokenizer_config.json
[INFO|2025-02-10 16:00:13] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100/special_tokens_map.json
[INFO|2025-02-10 16:00:14] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-100/chat_template.json
[INFO|2025-02-10 16:03:44] logging.py:157 >> {'loss': 0.6358, 'learning_rate': 4.4747e-08, 'epoch': 2.93, 'throughput': 456.62}
[INFO|2025-02-10 16:03:44] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105
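The per-step dicts emitted by logging.py:157 are Python literals, so the loss curve can be recovered from a saved copy of this log with a few lines of parsing. A sketch (the log file name and the regex are assumptions):

```python
# Sketch: parse the {'loss': ..., 'learning_rate': ...} records out of a
# saved copy of this log. The file name and regex are assumptions.
import ast
import re

records = []
with open("train.log") as f:  # hypothetical path to this log
    for line in f:
        m = re.search(r">> (\{'loss'[^}]*\})", line)
        if m:
            records.append(ast.literal_eval(m.group(1)))

for r in records:
    print(f"epoch {r['epoch']:.2f}  loss {r['loss']:.4f}  lr {r['learning_rate']:.2e}")
```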
[INFO|2025-02-10 16:03:44] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/config.json
[INFO|2025-02-10 16:03:44] configuration_utils.py:762 >> Model config Qwen2VLConfig { ... }
[INFO|2025-02-10 16:03:44] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105/tokenizer_config.json
[INFO|2025-02-10 16:03:44] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105/special_tokens_map.json
[INFO|2025-02-10 16:03:45] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105/preprocessor_config.json
[INFO|2025-02-10 16:03:45] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105/tokenizer_config.json
[INFO|2025-02-10 16:03:45] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105/special_tokens_map.json
[INFO|2025-02-10 16:03:45] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/checkpoint-105/chat_template.json
[INFO|2025-02-10 16:03:45] trainer.py:2636 >> Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|2025-02-10 16:03:45] image_processing_base.py:261 >> Image processor saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/preprocessor_config.json
[INFO|2025-02-10 16:03:45] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/tokenizer_config.json
[INFO|2025-02-10 16:03:45] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/special_tokens_map.json
[INFO|2025-02-10 16:03:46] processing_utils.py:546 >> chat template saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/chat_template.json
[INFO|2025-02-10 16:03:46] trainer.py:3887 >> Saving model checkpoint to saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ
[INFO|2025-02-10 16:03:46] configuration_utils.py:695 >> loading configuration file config.json from cache at /home/intern01/.cache/huggingface/hub/models--Qwen--Qwen2-VL-7B-Instruct-AWQ/snapshots/6ec2560b0afc3a618d4acc9b8e2967d1642f463d/config.json
[INFO|2025-02-10 16:03:46] configuration_utils.py:762 >> Model config Qwen2VLConfig { ... }
[INFO|2025-02-10 16:03:46] tokenization_utils_base.py:2485 >> tokenizer config file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/tokenizer_config.json
[INFO|2025-02-10 16:03:46] tokenization_utils_base.py:2494 >> Special tokens file saved in saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ/special_tokens_map.json
[WARNING|2025-02-10 16:03:47] logging.py:162 >> No metric eval_loss to plot.
[WARNING|2025-02-10 16:03:47] logging.py:162 >> No metric eval_accuracy to plot.
[INFO|2025-02-10 16:03:47] modelcard.py:449 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
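With no eval set configured, the two "No metric ... to plot" warnings are expected, and the run ends with the final adapter in the top-level save directory. For inference the adapter is attached on top of the AWQ base model; note that LoRA weights generally cannot be merged into 4-bit AWQ-quantized linear layers, so the adapter stays attached at load time. A sketch (paths from the log; generation settings omitted):

```python
# Sketch: load the finished adapter on top of the AWQ base for inference.
# The adapter path is the final save directory from the log.
import torch
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct-AWQ", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "saves/Qwen2-VL-7B-Instruct-AWQ/lora/250210_Abroad_LoRA_7B_AWQ"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct-AWQ")
# Note: avoid merge_and_unload() here -- merging LoRA deltas into 4-bit AWQ
# weights is generally unsupported; keep the adapter attached instead.
```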