runtime error
Exit code: 1. Reason:
config.json: 100%|██████████| 1.28k/1.28k [00:00<00:00, 9.56MB/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1071, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 773, in __getitem__
    raise KeyError(key)
KeyError: 'multi_modality'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    model = AutoModelForCausalLM.from_pretrained("deepseek-ai/Janus-Pro-7B")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1073, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
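The failure at app.py line 10 is that `deepseek-ai/Janus-Pro-7B` declares the custom model type `multi_modality`, which is not a built-in Transformers architecture, so the generic advice in the error message (upgrading Transformers) may not resolve it. A likely fix, following the usage published in DeepSeek's Janus repository (https://github.com/deepseek-ai/Janus), is to install their `janus` package and load the model through it. This is a minimal sketch under that assumption; the names `VLChatProcessor` and `MultiModalityCausalLM` are taken from that repo and are not part of stock Transformers:

```python
# Assumes the DeepSeek Janus package is installed first, e.g.:
#   pip install git+https://github.com/deepseek-ai/Janus.git
import torch
from transformers import AutoModelForCausalLM

# Importing from janus.models makes the `multi_modality` architecture
# available (names assumed from the DeepSeek Janus repo).
from janus.models import MultiModalityCausalLM, VLChatProcessor

model_path = "deepseek-ai/Janus-Pro-7B"

# The processor bundles the tokenizer and image preprocessing.
processor = VLChatProcessor.from_pretrained(model_path)

# trust_remote_code=True allows the checkpoint's custom modeling code
# to be used instead of a built-in Transformers architecture.
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True
)
model = model.to(torch.bfloat16).eval()
```

If the import itself fails, the `janus` package is missing from the Space's requirements.txt; adding `git+https://github.com/deepseek-ai/Janus.git` there should make the build install it before app.py runs.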