Got an unexpected keyword argument 'model_name_or_path'
Using your example I got this error: TypeError: BaseQuantizeConfig.__init__() got an unexpected keyword argument 'model_name_or_path'
on the line:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, ...)
auto-gptq==0.2.2
nvidia-smi:
Fri Jun 30 14:18:18 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01 Driver Version: 515.105.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 0% 24C P8 8W / 250W | 289MiB / 11264MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Does anybody know how to fix it?
What is your full script?
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           use_safetensors=True,
                                           device="cuda:0",
                                           use_triton=use_triton,
                                           quantize_config=None)
You've only used a portion of the script and haven't defined model_name_or_path. Check the full script in the README.
I defined model_name_or_path but forgot to paste it into the snippet I sent.
I got a similar error when loading this model with text-generation-webui.
The error message: BaseQuantizeConfig.__init__() got an unexpected keyword argument 'model_name_or_path'
is raised when from_pretrained() passes the contents of quantize_config.json as keyword arguments to BaseQuantizeConfig's constructor, which does not accept those keys.
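The mechanism is easy to reproduce in plain Python: unpacking a dict with ** into a constructor that doesn't accept one of the keys raises exactly this TypeError. This is a minimal illustration, not auto-gptq's actual code; the extra key name is taken from quantize_config.json above:

```python
# Minimal illustration of how an unexpected JSON key triggers the TypeError.
# This mimics the pattern, not auto-gptq's real BaseQuantizeConfig.

class BaseQuantizeConfig:
    def __init__(self, bits=4, group_size=128):
        self.bits = bits
        self.group_size = group_size

# Simulates the dict loaded from quantize_config.json, including a key
# the constructor does not know about.
loaded = {"bits": 4, "group_size": 128, "model_name_or_path": None}

try:
    config = BaseQuantizeConfig(**loaded)  # unknown key -> TypeError
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'model_name_or_path'
```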
I was able to load the model by removing these 2 lines and the preceding comma from quantize_config.json:
"model_name_or_path": null,
"model_file_base_name": null
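The same cleanup can be done programmatically instead of by hand. The sketch below builds a throwaway quantize_config.json in a temp directory just so it is self-contained; in practice you would point config_path at the file inside your model directory:

```python
import json
import os
import tempfile

# Demonstration file with the two offending keys. In practice, set
# config_path to the quantize_config.json in your model directory.
sample = {
    "bits": 4,
    "group_size": 128,
    "model_name_or_path": None,
    "model_file_base_name": None,
}
config_path = os.path.join(tempfile.mkdtemp(), "quantize_config.json")
with open(config_path, "w") as f:
    json.dump(sample, f)

# Strip the keys that this version of BaseQuantizeConfig doesn't accept.
with open(config_path) as f:
    cfg = json.load(f)
for key in ("model_name_or_path", "model_file_base_name"):
    cfg.pop(key, None)  # remove if present; no error if absent
with open(config_path, "w") as f:
    json.dump(cfg, f, indent=2)

print(cfg)  # {'bits': 4, 'group_size': 128}
```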
I suspected as much.
pip show auto-gptq
gives me: 0.2.0.dev0
But since I found this post while searching for my error message, and the one in this thread had exactly the same text (probably raised at the same place in the code), I thought there might be a connection.
OK, please update auto-gptq:
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
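After reinstalling, you can confirm which version is active (importlib.metadata is in the standard library from Python 3.8 onward; the distribution name matches the pip show command above):

```python
# Quick check of the installed auto-gptq version after reinstalling.
from importlib.metadata import PackageNotFoundError, version

try:
    print(version("auto-gptq"))
except PackageNotFoundError:
    print("auto-gptq is not installed")
```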