runtime error

Exit code: 1. Reason: The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.

0it [00:00, ?it/s]
0it [00:00, ?it/s]
config_promax.json:   0%|          | 0.00/1.26k [00:00<?, ?B/s]
config_promax.json: 100%|██████████| 1.26k/1.26k [00:00<00:00, 13.5MB/s]
(…)ffusion_pytorch_model_promax.safetensors:   0%|          | 0.00/2.51G [00:00<?, ?B/s]
(…)ffusion_pytorch_model_promax.safetensors:   9%|▉         | 231M/2.51G [00:01<00:09, 228MB/s]
(…)ffusion_pytorch_model_promax.safetensors:  48%|████▊     | 1.21G/2.51G [00:02<00:01, 665MB/s]
(…)ffusion_pytorch_model_promax.safetensors:  99%|█████████▉| 2.49G/2.51G [00:03<00:00, 947MB/s]
(…)ffusion_pytorch_model_promax.safetensors: 100%|█████████▉| 2.51G/2.51G [00:03<00:00, 797MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 30, in <module>
    model.to(device="cuda", dtype=torch.float16)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 1077, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1340, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 927, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1326, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
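The crash comes from app.py line 30 moving the model to CUDA unconditionally while the container has no NVIDIA driver (CPU-only hardware). A minimal sketch of a guard, assuming `model` is the diffusers model already loaded earlier in app.py (variable and line placement are assumptions, not confirmed from the Space's source):

```python
import torch

# Pick the device at runtime instead of hard-coding "cuda".
# Use float16 only when a GPU is actually available; fall back to float32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# `model` is assumed to be the diffusers model loaded earlier in app.py.
model.to(device=device, dtype=dtype)
```

Alternatively, assigning GPU hardware to the Space would let the original `model.to(device="cuda", dtype=torch.float16)` call run as written.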
