/opt/conda/envs/py310/bin/python -m mlc_llm gen_config /models/Phi-3-mini-128k-instruct --quantization q4f32_1 --conv-template phi-3 --output /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
[2024-06-02 06:08:17] INFO auto_config.py:116: Found model configuration: /models/Phi-3-mini-128k-instruct/config.json
[2024-06-02 06:08:17] INFO auto_config.py:154: Found model type: phi3. Use `--model-type` to override.
[2024-06-02 06:08:17] INFO phi3_model.py:53: context_window_size not found in config.json. Falling back to max_position_embeddings (131072)
[2024-06-02 06:08:17] INFO phi3_model.py:68: prefill_chunk_size defaults to 2048
[2024-06-02 06:08:17] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-06-02 06:08:17] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 1
[2024-06-02 06:08:17] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: [32000, 32001, 32007]
[2024-06-02 06:08:17] INFO gen_config.py:143: [generation_config.json] Setting pad_token_id: 32000
[2024-06-02 06:08:17] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/tokenizer.model. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/tokenizer.model
[2024-06-02 06:08:17] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/tokenizer.json
[2024-06-02 06:08:17] INFO gen_config.py:157: Not found tokenizer config: /models/Phi-3-mini-128k-instruct/vocab.json
[2024-06-02 06:08:17] INFO gen_config.py:157: Not found tokenizer config: /models/Phi-3-mini-128k-instruct/merges.txt
[2024-06-02 06:08:17] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/added_tokens.json. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/added_tokens.json
[2024-06-02 06:08:17] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/tokenizer_config.json
[2024-06-02 06:08:17] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_fallback', 'prepend_space_in_encode': True, 'strip_space_in_decode': True}
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting temperature: 1.0
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting repetition_penalty: 1.0
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting top_p: 1.0
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting mean_gen_len: 128
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting max_gen_len: 512
[2024-06-02 06:08:17] INFO gen_config.py:32: [System default] Setting shift_fill_factor: 0.3
[2024-06-02 06:08:17] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/mlc-chat-config.json
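The mlc-chat-config.json dumped above gathers the values this run just logged. As a rough reconstruction (the field names mirror the log messages, but the real file's schema varies across mlc_llm versions, so treat this as an illustrative sketch, not the actual dump):

# Hypothetical reconstruction, not the real file: the values gen_config logged,
# arranged as a flat dict. The real mlc-chat-config.json may nest these
# differently depending on the mlc_llm version.
import json

mlc_chat_config = {
    "model_type": "phi3",
    "quantization": "q4f32_1",
    "context_window_size": 131072,   # fallback to max_position_embeddings
    "prefill_chunk_size": 2048,
    "max_batch_size": 80,
    "conv_template": "phi-3",
    "temperature": 1.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "repetition_penalty": 1.0,
    "top_p": 1.0,
    "mean_gen_len": 128,
    "max_gen_len": 512,
    "shift_fill_factor": 0.3,
    "bos_token_id": 1,
    "eos_token_id": [32000, 32001, 32007],
    "pad_token_id": 32000,
}
print(json.dumps(mlc_chat_config, indent=2))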
/opt/conda/envs/py310/bin/python -m mlc_llm convert_weight /models/Phi-3-mini-128k-instruct --quantization q4f32_1 --output /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
[2024-06-02 06:08:19] INFO auto_config.py:116: Found model configuration: /models/Phi-3-mini-128k-instruct/config.json
[2024-06-02 06:08:20] INFO auto_device.py:79: Found device: cuda:0
[2024-06-02 06:08:21] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-02 06:08:23] INFO auto_device.py:88: Not found device: metal:0
[2024-06-02 06:08:24] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-02 06:08:24] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-02 06:08:24] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-02 06:08:24] INFO auto_device.py:79: Found device: vulkan:3
[2024-06-02 06:08:26] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-02 06:08:26] INFO auto_device.py:35: Using device: cuda:0
[2024-06-02 06:08:26] INFO auto_weight.py:71: Finding weights in: /models/Phi-3-mini-128k-instruct
[2024-06-02 06:08:26] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-02 06:08:26] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /models/Phi-3-mini-128k-instruct/model.safetensors.index.json
[2024-06-02 06:08:26] INFO auto_weight.py:107: Using source weight configuration: /models/Phi-3-mini-128k-instruct/model.safetensors.index.json. Use `--source` to override.
[2024-06-02 06:08:26] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-02 06:08:26] INFO auto_config.py:154: Found model type: phi3. Use `--model-type` to override.
[2024-06-02 06:08:26] INFO phi3_model.py:53: context_window_size not found in config.json. Falling back to max_position_embeddings (131072)
[2024-06-02 06:08:26] INFO phi3_model.py:68: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          /models/Phi-3-mini-128k-instruct/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      phi3
  --device          cuda:0
  --source          /models/Phi-3-mini-128k-instruct/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
Start storing to cache /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
/home/rickzhou/miniconda3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py: UserWarning: The value of the smallest subnormal for <class ...> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/rickzhou/miniconda3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class ...> type is zero.
  return self._float_to_str(self.smallest_subnormal)
[2024-06-02 04:04:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.21.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:10] INFO group_quantization.py:217: Compiling quantize function for key: ((3072, 8192), float32, cuda, axis=1, output_transpose=False)
[2024-06-02 04:04:11] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:11] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:11] INFO group_quantization.py:217: Compiling quantize function for key: ((16384, 3072), float32, cuda, axis=1, output_transpose=False)
[2024-06-02 04:04:11] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:11] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.21.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:11] INFO group_quantization.py:217: Compiling quantize function for key: ((9216, 3072), float32, cuda, axis=1, output_transpose=False)
[2024-06-02 04:04:11] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:11] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.22.ln.weight", shape: (3072,), dtype: float32
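The GroupQuantize arguments above determine every q_weight/q_scale shape reported below: along the quantized axis, 8 int4 codes pack into one uint32 word (num_elem_per_storage=8) and each group of 32 elements (group_size=32) gets one float32 scale. A minimal arithmetic check against the logged shapes:

# Shape check for q4f32_1: 8 int4 codes per uint32 word, one float32 scale per
# 32-element group along axis=1 (the axis named in the compile-key messages).
def q4f32_1_shapes(rows, cols, elems_per_word=8, group_size=32):
    return (rows, cols // elems_per_word), (rows, cols // group_size)

# down_proj (3072, 8192) -> q_weight (3072, 1024) uint32, q_scale (3072, 256) float32
assert q4f32_1_shapes(3072, 8192) == ((3072, 1024), (3072, 256))
# gate_up_proj (16384, 3072) -> (16384, 384), (16384, 96)
assert q4f32_1_shapes(16384, 3072) == ((16384, 384), (16384, 96))
# qkv_proj (9216, 3072) -> (9216, 384), (9216, 96)
assert q4f32_1_shapes(9216, 3072) == ((9216, 384), (9216, 96))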
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.22.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:12] INFO group_quantization.py:217: Compiling quantize function for key: ((3072, 3072), float32, cuda, axis=1, output_transpose=False)
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.22.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.23.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.23.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:12] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.23.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.24.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.25.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:13] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.26.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.27.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.28.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:14] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.29.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.30.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:15] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:16] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.31.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.norm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:16] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Phi-3-mini-128k-instruct/model-00002-of-00002.safetensors
[2024-06-02 04:04:16] INFO huggingface_loader.py:184: Loading HF parameters from: /ssd1/rickzhou/models/Phi-3-mini-128k-instruct/model-00001-of-00002.safetensors
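The unload/load pair above shows the converter streaming one safetensors shard at a time instead of keeping both resident. A minimal sketch of that pattern, assuming only the stock safetensors API and the index file named earlier in the log (illustrative only, not MLC's actual loader code):

# Group parameter names by the shard recorded in model.safetensors.index.json,
# then open each shard once, convert its tensors, and let it be freed before
# moving on to the next shard.
import json
from collections import defaultdict
from safetensors import safe_open

src = "/models/Phi-3-mini-128k-instruct"
with open(f"{src}/model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]  # parameter name -> shard file name

by_shard = defaultdict(list)
for name, shard in weight_map.items():
    by_shard[shard].append(name)

for shard, names in by_shard.items():
    with safe_open(f"{src}/{shard}", framework="numpy") as st:
        for name in names:
            tensor = st.get_tensor(name)
            # quantize / rename here, then drop the reference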
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.embd.q_weight", shape: (32064, 384), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.embd.q_scale", shape: (32064, 96), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.0.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:17] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.1.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.10.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.11.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:18] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.12.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.13.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.14.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:19] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.15.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.16.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:20] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.17.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.18.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.19.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:21] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.2.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.20.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.21.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:22] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.3.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.4.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.ln.weight", shape: (3072,), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32
[2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32
[2024-06-02 04:04:23] INFO
huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32 86%|████████▌ | 167/195 [00:15<00:01, 16.65it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.post_attention_layernorm.weight", shape: (3072,), dtype: float32 86%|████████▌ | 167/195 [00:15<00:01, 16.65it/s] 87%|████████▋ | 169/195 [00:15<00:01, 15.58it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32 87%|████████▋ | 169/195 [00:15<00:01, 15.58it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32 87%|████████▋ | 169/195 [00:15<00:01, 15.58it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32 87%|████████▋ | 169/195 [00:15<00:01, 15.58it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.5.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32 87%|████████▋ | 169/195 [00:15<00:01, 15.58it/s] 88%|████████▊ | 171/195 [00:15<00:01, 15.46it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.ln.weight", shape: (3072,), dtype: float32 88%|████████▊ | 171/195 [00:15<00:01, 15.46it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32 88%|████████▊ | 171/195 [00:16<00:01, 15.46it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32 88%|████████▊ | 171/195 [00:16<00:01, 15.46it/s] [2024-06-02 04:04:23] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32 88%|████████▊ | 171/195 [00:16<00:01, 15.46it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32 88%|████████▊ | 171/195 [00:16<00:01, 15.46it/s] 89%|████████▉ | 174/195 [00:16<00:01, 15.88it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.post_attention_layernorm.weight", shape: (3072,), dtype: float32 89%|████████▉ | 174/195 [00:16<00:01, 15.88it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32 89%|████████▉ | 174/195 [00:16<00:01, 15.88it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32 89%|████████▉ | 174/195 [00:16<00:01, 15.88it/s] 90%|█████████ | 176/195 [00:16<00:01, 15.16it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32 90%|█████████ | 176/195 [00:16<00:01, 15.16it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.6.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32 90%|█████████ | 176/195 [00:16<00:01, 15.16it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.ln.weight", shape: (3072,), 
dtype: float32 90%|█████████ | 176/195 [00:16<00:01, 15.16it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32 90%|█████████ | 176/195 [00:16<00:01, 15.16it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32 90%|█████████ | 176/195 [00:16<00:01, 15.16it/s] 92%|█████████▏| 179/195 [00:16<00:00, 16.70it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32 92%|█████████▏| 179/195 [00:16<00:00, 16.70it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32 92%|█████████▏| 179/195 [00:16<00:00, 16.70it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.post_attention_layernorm.weight", shape: (3072,), dtype: float32 92%|█████████▏| 179/195 [00:16<00:00, 16.70it/s] 93%|█████████▎| 181/195 [00:16<00:00, 15.63it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32 93%|█████████▎| 181/195 [00:16<00:00, 15.63it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32 93%|█████████▎| 181/195 [00:16<00:00, 15.63it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32 93%|█████████▎| 181/195 [00:16<00:00, 15.63it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.7.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32 93%|█████████▎| 181/195 [00:16<00:00, 15.63it/s] 94%|█████████▍| 183/195 [00:16<00:00, 15.48it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.ln.weight", shape: (3072,), dtype: float32 94%|█████████▍| 183/195 [00:16<00:00, 15.48it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32 94%|█████████▍| 183/195 [00:16<00:00, 15.48it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32 94%|█████████▍| 183/195 [00:16<00:00, 15.48it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32 94%|█████████▍| 183/195 [00:16<00:00, 15.48it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32 94%|█████████▍| 183/195 [00:16<00:00, 15.48it/s] 95%|█████████▌| 186/195 [00:16<00:00, 15.91it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.post_attention_layernorm.weight", shape: (3072,), dtype: float32 95%|█████████▌| 186/195 [00:16<00:00, 15.91it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32 95%|█████████▌| 186/195 [00:16<00:00, 15.91it/s] [2024-06-02 04:04:24] INFO 
huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32 95%|█████████▌| 186/195 [00:17<00:00, 15.91it/s] 96%|█████████▋| 188/195 [00:17<00:00, 15.15it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32 96%|█████████▋| 188/195 [00:17<00:00, 15.15it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.8.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32 96%|█████████▋| 188/195 [00:17<00:00, 15.15it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.ln.weight", shape: (3072,), dtype: float32 96%|█████████▋| 188/195 [00:17<00:00, 15.15it/s] [2024-06-02 04:04:24] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mlp.down_proj.q_weight", shape: (3072, 1024), dtype: uint32 96%|█████████▋| 188/195 [00:17<00:00, 15.15it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mlp.down_proj.q_scale", shape: (3072, 256), dtype: float32 96%|█████████▋| 188/195 [00:17<00:00, 15.15it/s] 98%|█████████▊| 191/195 [00:17<00:00, 16.72it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mlp.gate_up_proj.q_weight", shape: (16384, 384), dtype: uint32 98%|█████████▊| 191/195 [00:17<00:00, 16.72it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mlp.gate_up_proj.q_scale", shape: (16384, 96), dtype: float32 98%|█████████▊| 191/195 [00:17<00:00, 16.72it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.post_attention_layernorm.weight", shape: (3072,), dtype: float32 98%|█████████▊| 191/195 [00:17<00:00, 16.72it/s] 99%|█████████▉| 193/195 [00:17<00:00, 15.62it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mixer.out_proj.q_weight", shape: (3072, 384), dtype: uint32 99%|█████████▉| 193/195 [00:17<00:00, 15.62it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mixer.out_proj.q_scale", shape: (3072, 96), dtype: float32 99%|█████████▉| 193/195 [00:17<00:00, 15.62it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mixer.qkv_proj.q_weight", shape: (9216, 384), dtype: uint32 99%|█████████▉| 193/195 [00:17<00:00, 15.62it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:166: [Quantized] Parameter: "transformer.h.9.mixer.qkv_proj.q_scale", shape: (9216, 96), dtype: float32 99%|█████████▉| 193/195 [00:17<00:00, 15.62it/s] 100%|██████████| 195/195 [00:17<00:00, 15.48it/s] 100%|██████████| 195/195 [00:17<00:00, 11.13it/s] [2024-06-02 04:04:25] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Phi-3-mini-128k-instruct/model-00001-of-00002.safetensors [2024-06-02 04:04:25] INFO stats.py:76: Time usage: HF loading: 2.646 sec; Pre-quantization mapping: 1.733 sec; Quantization: 2.463 sec [2024-06-02 04:04:25] INFO stats.py:90: RAM usage: Peak RAM: 9.262 GB. 
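The shapes logged above follow mechanically from the q4f32_1 layout reported later in this log (group_size=32, eight 4-bit values packed per uint32 storage word): a float32 linear weight of shape (out, in) is stored as a uint32 q_weight of shape (out, in / 8) plus a float32 q_scale of shape (out, in / 32). The sketch below reproduces the logged shapes; the in_features values (3072 hidden size, 8192 MLP intermediate size) are inferred from the shapes themselves rather than read from config.json:

```python
# Sanity-check the logged q4f32_1 shapes. Unquantized layouts below are
# inferred: out_proj is (3072, 3072), qkv_proj stacks 3 * 3072 rows,
# gate_up_proj stacks 2 * 8192 rows, down_proj maps 8192 -> 3072.
layers = {
    "mixer.out_proj":   (3072, 3072),
    "mixer.qkv_proj":   (9216, 3072),
    "mlp.gate_up_proj": (16384, 3072),
    "mlp.down_proj":    (3072, 8192),
}
for name, (out_f, in_f) in layers.items():
    q_weight = (out_f, in_f // 8)    # 8 int4 codes per uint32 word
    q_scale = (out_f, in_f // 32)    # one float32 scale per 32-element group
    print(f"{name:18s} q_weight{q_weight}  q_scale{q_scale}")
```

Running this prints exactly the q_weight/q_scale pairs seen in the loader output, e.g. down_proj's 8192 input features yield the (3072, 1024) uint32 and (3072, 256) float32 tensors logged above.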
[2024-06-02 04:04:25] INFO stats.py:76: Time usage: HF loading: 2.646 sec; Pre-quantization mapping: 1.733 sec; Quantization: 2.463 sec
[2024-06-02 04:04:25] INFO stats.py:90: RAM usage: Peak RAM: 9.262 GB. Total bytes loaded from disk: 14.235 GB
[2024-06-02 04:04:25] INFO convert_weight.py:155: Parameter size after quantization: 2.225 GB
[2024-06-02 04:04:25] INFO convert_weight.py:160: Total parameters: 3,821,079,552
[2024-06-02 04:04:25] INFO convert_weight.py:161: Bits per parameter: 5.001
[2024-06-02 04:04:25] INFO convert_weight.py:166: Saved to directory: /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
All finished, 83 total shards committed, record saved to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/ndarray-cache.json
Also saved a bf16 record to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/ndarray-cache-b16.json
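The "Bits per parameter: 5.001" figure is consistent with the storage format: 4 bits per weight plus one 32-bit float scale per 32-element group adds roughly 1 bit per parameter, and the small remainder comes from the layernorm vectors kept in float32. A quick arithmetic check, assuming the logged "GB" means 2^30-byte units (decimal gigabytes would give ~4.66, which does not match):

```python
# Reproduce the logged bits-per-parameter figure from the other logged stats.
param_bytes = 2.225 * 2**30      # "Parameter size after quantization: 2.225 GB"
total_params = 3_821_079_552     # "Total parameters" from the log
print(param_bytes * 8 / total_params)  # ~5.0018, matching the logged 5.001
```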
/opt/conda/envs/py310/bin/python -m mlc_llm gen_config /models/Phi-3-mini-128k-instruct --quantization q4f32_1 --conv-template phi-3 --output /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
[2024-06-04 02:47:01] INFO auto_config.py:116: Found model configuration: /models/Phi-3-mini-128k-instruct/config.json
[2024-06-04 02:47:01] INFO auto_config.py:154: Found model type: phi3. Use `--model-type` to override.
[2024-06-04 02:47:01] INFO phi3_model.py:53: context_window_size not found in config.json. Falling back to max_position_embeddings (131072)
[2024-06-04 02:47:01] INFO phi3_model.py:68: prefill_chunk_size defaults to 2048
[2024-06-04 02:47:01] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-06-04 02:47:01] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 1
[2024-06-04 02:47:01] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: [32000, 32001, 32007]
[2024-06-04 02:47:01] INFO gen_config.py:143: [generation_config.json] Setting pad_token_id: 32000
[2024-06-04 02:47:01] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/tokenizer.model. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/tokenizer.model
[2024-06-04 02:47:01] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/tokenizer.json
[2024-06-04 02:47:01] INFO gen_config.py:157: Not found tokenizer config: /models/Phi-3-mini-128k-instruct/vocab.json
[2024-06-04 02:47:01] INFO gen_config.py:157: Not found tokenizer config: /models/Phi-3-mini-128k-instruct/merges.txt
[2024-06-04 02:47:01] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/added_tokens.json. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/added_tokens.json
[2024-06-04 02:47:01] INFO gen_config.py:155: Found tokenizer config: /models/Phi-3-mini-128k-instruct/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/tokenizer_config.json
[2024-06-04 02:47:01] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_fallback', 'prepend_space_in_encode': True, 'strip_space_in_decode': True}
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting temperature: 1.0
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting repetition_penalty: 1.0
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting top_p: 1.0
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting mean_gen_len: 128
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting max_gen_len: 512
[2024-06-04 02:47:01] INFO gen_config.py:32: [System default] Setting shift_fill_factor: 0.3
[2024-06-04 02:47:01] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC/mlc-chat-config.json
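After this step the delivery directory contains mlc-chat-config.json with the values logged above. One way to spot-check it from Python; the key names here are assumptions based on the logged setting names, and the exact schema varies across mlc_llm versions:

```python
import json

# Path produced by the gen_config run above.
path = ("/models/mlc-delivery/hf/mlc-ai/"
        "Phi-3-mini-128k-instruct-q4f32_1-MLC/mlc-chat-config.json")
with open(path) as f:
    cfg = json.load(f)

# .get() is used deliberately: any key absent in your mlc_llm version
# simply prints None instead of raising.
for key in ("temperature", "top_p", "context_window_size",
            "prefill_chunk_size", "eos_token_id"):
    print(key, "=", cfg.get(key))  # expect the values logged above
```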
/opt/conda/envs/py310/bin/python -m mlc_llm convert_weight /models/Phi-3-mini-128k-instruct --quantization q4f32_1 --output /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
[2024-06-04 02:47:02] INFO auto_config.py:116: Found model configuration: /models/Phi-3-mini-128k-instruct/config.json
[2024-06-04 02:47:04] INFO auto_device.py:79: Found device: cuda:0
[2024-06-04 02:47:05] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-04 02:47:07] INFO auto_device.py:88: Not found device: metal:0
[2024-06-04 02:47:09] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-04 02:47:09] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-04 02:47:09] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-04 02:47:09] INFO auto_device.py:79: Found device: vulkan:3
[2024-06-04 02:47:10] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-04 02:47:10] INFO auto_device.py:35: Using device: cuda:0
[2024-06-04 02:47:10] INFO auto_weight.py:71: Finding weights in: /models/Phi-3-mini-128k-instruct
[2024-06-04 02:47:10] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-04 02:47:10] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /models/Phi-3-mini-128k-instruct/model.safetensors.index.json
[2024-06-04 02:47:10] INFO auto_weight.py:107: Using source weight configuration: /models/Phi-3-mini-128k-instruct/model.safetensors.index.json. Use `--source` to override.
[2024-06-04 02:47:10] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-04 02:47:10] INFO auto_config.py:154: Found model type: phi3. Use `--model-type` to override.
[2024-06-04 02:47:10] INFO phi3_model.py:53: context_window_size not found in config.json. Falling back to max_position_embeddings (131072)
[2024-06-04 02:47:10] INFO phi3_model.py:68: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          /models/Phi-3-mini-128k-instruct/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      phi3
  --device          cuda:0
  --source          /models/Phi-3-mini-128k-instruct/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
Start storing to cache /models/mlc-delivery/hf/mlc-ai/Phi-3-mini-128k-instruct-q4f32_1-MLC
0%| | 0/195 [00:00
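The GroupQuantize arguments spell out the q4f32_1 storage format used throughout this log: 32-element groups (num_storage_per_group=4 uint32 words of num_elem_per_storage=8 int4 codes each) with max_int_value=7 and float32 scales. The NumPy sketch below is one plausible implementation of that layout, not MLC's actual TIR kernel; the offset-binary encoding (codes 0..14 around a midpoint of 7) and the rounding/clipping details are assumptions:

```python
import numpy as np

GROUP_SIZE = 32      # group_size from the logged GroupQuantize config
ELEMS_PER_U32 = 8    # num_elem_per_storage: eight 4-bit codes per uint32
MAX_INT = 7          # max_int_value for int4

def group_quantize(w: np.ndarray):
    """Quantize a float32 (out, in) matrix into q_weight/q_scale tensors
    with the shapes seen in the loader log. Sketch only."""
    out_f, in_f = w.shape
    groups = w.reshape(out_f, in_f // GROUP_SIZE, GROUP_SIZE)
    scale = np.abs(groups).max(axis=-1) / MAX_INT   # (out, in/32)
    scale = np.maximum(scale, 1e-8)                 # guard all-zero groups
    q = np.rint(groups / scale[..., None]) + MAX_INT  # shift into [0, 14]
    q = np.clip(q, 0, 2 * MAX_INT).astype(np.uint32).reshape(out_f, in_f)
    # Pack eight 4-bit codes into each uint32 word, low nibble first.
    nibbles = q.reshape(out_f, in_f // ELEMS_PER_U32, ELEMS_PER_U32)
    shifts = 4 * np.arange(ELEMS_PER_U32, dtype=np.uint32)
    q_weight = (nibbles << shifts).sum(axis=-1, dtype=np.uint32)
    return q_weight, scale.astype(np.float32)

w = np.random.randn(3072, 3072).astype(np.float32)  # e.g. a mixer.out_proj
q_weight, q_scale = group_quantize(w)
print(q_weight.shape, q_weight.dtype)  # (3072, 384) uint32, as in the log
print(q_scale.shape, q_scale.dtype)    # (3072, 96) float32, as in the log
```

Dequantization under the same assumed encoding is (code - 7) * scale per element, which is why each 32-element group needs only its one float32 scale alongside the packed words.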