runtime error

#2
by Mayareis - opened

runtime error
Exit code: 1. Reason: see the log excerpt below.
A new version of the following files was downloaded from https://huggingface.co./ragavsachdeva/magiv2:

  • modelling_magiv2.py
  • utils.py
  • processing_magiv2.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    model = AutoModel.from_pretrained("ragavsachdeva/magiv2", trust_remote_code=True).cuda().eval()
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3832, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/ragavsachdeva/magiv2/b071df76be422af52c298980286b3fd751d00fd3/modelling_magiv2.py", line 31, in __init__
    self.crop_embedding_model = ViTMAEModel(config.crop_embedding_model_config)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/vit_mae/modeling_vit_mae.py", line 715, in __init__
    self.embeddings = ViTMAEEmbeddings(config)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/vit_mae/modeling_vit_mae.py", line 210, in __init__
    self.initialize_weights()
  File "/usr/local/lib/python3.10/site-packages/transformers/models/vit_mae/modeling_vit_mae.py", line 217, in initialize_weights
    self.position_embeddings.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0))
RuntimeError: Numpy is not available
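The download notices above suggest pinning a revision so the remote modelling code is not silently re-downloaded on every restart. A minimal sketch of what that could look like in app.py, using the commit hash that appears in the traceback paths above (verify it is the revision you actually want; the `code_revision` argument is only available on recent transformers versions):

```python
from transformers import AutoModel

# Sketch only: pin the checkpoint (and the dynamically downloaded modelling
# code) to a known commit instead of tracking the default branch.
# The hash below is the one visible in the traceback paths above.
PINNED_REVISION = "b071df76be422af52c298980286b3fd751d00fd3"

model = (
    AutoModel.from_pretrained(
        "ragavsachdeva/magiv2",
        trust_remote_code=True,
        revision=PINNED_REVISION,
        code_revision=PINNED_REVISION,  # pins the remote *code* files as well
    )
    .cuda()
    .eval()
)
```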

Container logs:

===== Application Startup at 2024-08-28 17:38:26 =====

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 4, in <module>
    from transformers import AutoModel
  File "/usr/local/lib/python3.10/site-packages/transformers/__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "/usr/local/lib/python3.10/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
    from .utils.versions import require_version, require_version_core
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/__init__.py", line 34, in <module>
    from .generic import (
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/generic.py", line 462, in <module>
    import torch.utils._pytree as _torch_pytree
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 1477, in <module>
    from .functional import * # noqa: F403
  File "/usr/local/lib/python3.10/site-packages/torch/functional.py", line 9, in <module>
    import torch.nn.functional as F
  File "/usr/local/lib/python3.10/site-packages/torch/nn/__init__.py", line 1, in <module>
    from .modules import * # noqa: F403
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/__init__.py", line 35, in <module>
    from .transformer import TransformerEncoder, TransformerDecoder,
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 20, in <module>
    device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'),
/usr/local/lib/python3.10/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
  device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'),
A new version of the following files was downloaded from https://huggingface.co./ragavsachdeva/magiv2:

  • configuration_magiv2.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co./ragavsachdeva/magiv2:
  • utils.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co./ragavsachdeva/magiv2:
  • processing_magiv2.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co./ragavsachdeva/magiv2:
  • modelling_magiv2.py
  • utils.py
  • processing_magiv2.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    model = AutoModel.from_pretrained("ragavsachdeva/magiv2", trust_remote_code=True).cuda().eval()
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3832, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/ragavsachdeva/magiv2/b071df76be422af52c298980286b3fd751d00fd3/modelling_magiv2.py", line 31, in __init__
    self.crop_embedding_model = ViTMAEModel(config.crop_embedding_model_config)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/vit_mae/modeling_vit_mae.py", line 715, in __init__
    self.embeddings = ViTMAEEmbeddings(config)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/vit_mae/modeling_vit_mae.py", line 210, in __init__
    self.initialize_weights()
  File "/usr/local/lib/python3.10/site-packages/transformers/models/vit_mae/modeling_vit_mae.py", line 217, in initialize_weights
    self.position_embeddings.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0))
RuntimeError: Numpy is not available
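The NumPy advisory near the top of the log points at the actual failure: the prebuilt torch wheel in the Space was compiled against NumPy 1.x, but NumPy 2.0.1 is installed, so torch fails to initialize NumPy and `torch.from_numpy` raises "Numpy is not available". Following the log's own suggestion to downgrade to `numpy<2`, a hypothetical requirements.txt for the Space (assuming it installs dependencies the standard Spaces way, from a requirements.txt at the repo root; exact torch/transformers pins depend on the Space) could look like:

```text
# Hypothetical requirements.txt sketch: keep NumPy on the 1.x line until every
# compiled dependency (torch here) ships NumPy 2-compatible wheels.
numpy<2
torch
transformers
```

Alternatively, moving to a torch build that ships NumPy 2-compatible wheels would remove the need for the pin.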

Hi, thanks for flagging this. Should be fixed now.

ragavsachdeva changed discussion status to closed
