Error running code on Apple Silicon M1 Max

#8
by tulas - opened

I get the error below when running the model on an M1 Max. Any tips?
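For context, a minimal sketch of the kind of script that triggers this, assuming the standard model-card quickstart (the image path is a placeholder; the actual main.py may differ):

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

# Load the processor and model with Molmo's custom modeling code
processor = AutoProcessor.from_pretrained(
    "allenai/Molmo-7B-O-0924",
    trust_remote_code=True, torch_dtype="auto", device_map="auto",
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-O-0924",
    trust_remote_code=True, torch_dtype="auto", device_map="auto",
)

# Build a batch from an image and a prompt
inputs = processor.process(images=[Image.open("image.jpg")], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# This is the call that fails (main.py line 31 in the traceback below)
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
```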

Loading checkpoint shards: 100%|██████████| 7/7 [00:17<00:00, 2.49s/it]
Some parameters are on the meta device because they were offloaded to the disk.
Traceback (most recent call last):
File "/Users/tulas/Projects/molmo/main.py", line 31, in
output = model.generate_from_batch(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/.cache/huggingface/modules/transformers_modules/allenai/Molmo-7B-O-0924/1afe021df15873079d5c99ca1800d38acb23cd2d/modeling_molmo.py", line 2212, in generate_from_batch
out = super().generate(
^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/transformers/generation/utils.py", line 2047, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/transformers/generation/utils.py", line 3007, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/.cache/huggingface/modules/transformers_modules/allenai/Molmo-7B-O-0924/1afe021df15873079d5c99ca1800d38acb23cd2d/modeling_molmo.py", line 2106, in forward
outputs = self.model.forward(
^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/.cache/huggingface/modules/transformers_modules/allenai/Molmo-7B-O-0924/1afe021df15873079d5c99ca1800d38acb23cd2d/modeling_molmo.py", line 1920, in forward
attention_bias = get_causal_attention_bias(self.__cache, past_length + seq_len, x.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/.cache/huggingface/modules/transformers_modules/allenai/Molmo-7B-O-0924/1afe021df15873079d5c99ca1800d38acb23cd2d/modeling_molmo.py", line 1565, in get_causal_attention_bias
with torch.autocast(device.type, enabled=False):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tulas/Projects/molmo/env/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 229, in init
dtype = torch.get_autocast_dtype(device_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: unsupported scalarType
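The bottom frames show where it dies: the modeling code enters torch.autocast with device.type == "mps", and torch.get_autocast_dtype rejects that device type on this torch build. The failure can be reproduced in isolation (assuming a torch version without MPS autocast support):

```python
import torch

# torch.autocast.__init__ calls torch.get_autocast_dtype("mps") even when
# enabled=False (see autocast_mode.py line 229 in the traceback), so this
# raises the same "RuntimeError: unsupported scalarType" on such a build.
with torch.autocast("mps", enabled=False):
    pass
```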

Hello @Troubadix, I tried to replicate and resolve this, but it appears to be an issue with Torch itself, since other models throw the same error. This thread discusses bfloat16 support for the relevant operations. From my investigation, torch.autocast only works with the cpu and cuda device types on the torch version in your traceback, not mps. I'll look into this further and update you if I find anything. A few useful threads: [1], [2].
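In the meantime, keeping the model off MPS should sidestep the failing call, since the modeling code enters autocast with whatever device the weights sit on. A possible workaround (an untested sketch; CPU-only, so it will be slow):

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

processor = AutoProcessor.from_pretrained(
    "allenai/Molmo-7B-O-0924", trust_remote_code=True,
)
# Pin everything to CPU so modeling_molmo.py enters
# torch.autocast("cpu", ...), which is a supported device type.
model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-O-0924",
    trust_remote_code=True,
    torch_dtype=torch.float32,  # fp32, since bfloat16 op support is the other pain point here
    device_map="cpu",
)
```

Newer torch releases reportedly add autocast support for the mps device type, so upgrading torch may also resolve this, though I haven't verified that with this model.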

Thank you, @amanrangapur!
