Out of resource: shared memory

#16
by iszhaoxin - opened

I got the following error message:

```
triton.runtime.autotuner.OutOfResources: out of resource: shared memory, Required: 135200, Hardware limit: 101376. Reducing block sizes or num_stages may help.
```

I tried on both an RTX A6000 and an RTX 6000.
My guess is that the model was only trained and tested on specific types of GPUs, such as the A100?
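
A sketch of one possible mitigation, under two assumptions that are not confirmed in this thread: that the failing kernel is the model's Triton flash-attention implementation, and that the checkpoint accepts transformers' attn_implementation argument to fall back to eager attention. The model id below is a placeholder.

```python
from transformers import AutoModelForCausalLM

# Placeholder id: substitute the checkpoint this thread is about.
model_id = "your-org/your-model"

# Assumption: requesting eager attention bypasses the Triton kernel
# whose shared-memory requirement exceeds the hardware limit.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
```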

Yes, in my experience as well, this model only works on the GPUs listed as 'tested' in the documentation.

Microsoft org

The recommended target modules to adapt are:

"target_modules": [
"o_proj",
"qkv_proj"
]

@LeeStott how do we achieve this?

@LeeStott I'm running into this error as well. Can you show us how to set these target modules?

If you are using PEFT, you can set this via the "target_modules" parameter in LoraConfig:
https://huggingface.co./docs/peft/package_reference/lora#peft.LoraConfig
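
For example, a minimal sketch (the model id is a placeholder; r, lora_alpha, and lora_dropout are illustrative values, not values recommended in this thread):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",  # placeholder: substitute the actual checkpoint
    trust_remote_code=True,
)

# Restrict LoRA to the modules recommended above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["o_proj", "qkv_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only the adapters train
```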

@LeeStott,
Using these target modules with this model gives the following error:
ValueError: Target modules {'o_proj', 'qkv_proj'} not found in the base model. Please check the target modules and try again.

I tried other target modules such as "all-linear" and [q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head], but both options give the same error mentioned in the original question of this thread.
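
If it helps with debugging, here is a small sketch to list the linear-layer names a base model actually exposes, so target_modules can be matched against what is really there (the model id is a placeholder):

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",  # placeholder: substitute the actual checkpoint
    trust_remote_code=True,
)

# Collect the distinct leaf names of all linear layers; LoRA's
# target_modules must match these suffixes (e.g. "qkv_proj", "o_proj").
leaf_names = {
    name.split(".")[-1]
    for name, module in model.named_modules()
    if isinstance(module, nn.Linear)
}
print(sorted(leaf_names))
```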
