Runtime error
Exit code: 1. Reason:
loading shards: 100%|██████████| 4/4 [00:17<00:00, 4.48s/it]
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards:  75%|███████▌  | 3/4 [00:01<00:00, 2.80it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 2.90it/s]
generation_config.json:   0%|          | 0.00/113 [00:00<?, ?B/s]
generation_config.json: 100%|██████████| 113/113 [00:00<00:00, 771kB/s]
Device set to use cuda:0
/home/user/app/src/model_load.py:31: LangChainDeprecationWarning: The class `HuggingFacePipeline` was deprecated in LangChain 0.0.37 and will be removed in 1.0. An updated version of the class exists in the langchain-huggingface package and should be used instead. To use it run `pip install -U langchain-huggingface` and import as `from langchain_huggingface import HuggingFacePipeline`.
  llm = HuggingFacePipeline(pipeline=text_generation_pipeline)
Traceback (most recent call last):
  File "/home/user/app/app.py", line 58, in <module>
    main()
  File "/home/user/app/app.py", line 40, in main
    qa_chain = src.model_load.load_model()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 207, in gradio_handler
    res = worker.res_queue.get()
  File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 367, in get
    return _ForkingPickler.loads(res)
  File "/usr/local/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 180, in rebuild_cuda_tensor
    torch.cuda._lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 269, in _cuda_init_raise
    raise RuntimeError(
RuntimeError: CUDA must not be initialized in the main process on Spaces with Stateless GPU environment.
You can look at this Stacktrace to find out which part of your code triggered a CUDA init
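What the traceback suggests: `load_model()` is executed through the ZeroGPU wrapper (`spaces/zero/wrappers.py`) in a worker process, and its return value is pickled back to the main process. Because the returned chain holds CUDA tensors, unpickling it calls `torch.cuda._lazy_init()` in the main process, which is exactly what the Stateless GPU environment forbids. Below is a minimal sketch of one way to restructure such an app, under the assumption that it is a Gradio app on a ZeroGPU Space; the model ID and the `answer` function are hypothetical, not taken from the failing code. The idea is to build the model and LangChain pipeline once at import time (ZeroGPU patches torch so a top-level `.to("cuda")` is deferred rather than initializing CUDA in the main process) and to do GPU work only inside a `@spaces.GPU`-decorated function that returns plain Python data.

```python
# Hypothetical sketch for a ZeroGPU Space. MODEL_ID and answer() are
# illustrative assumptions, not the original app's code.
import spaces
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface import HuggingFacePipeline  # replaces the deprecated LangChain import

MODEL_ID = "some-org/some-model"  # placeholder

# Build everything once at import time, in the main process. On ZeroGPU the
# `spaces` package patches torch, so .to("cuda") here is deferred and does not
# trigger CUDA initialization in the main process.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.to("cuda")

text_generation_pipeline = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256
)
llm = HuggingFacePipeline(pipeline=text_generation_pipeline)

@spaces.GPU  # the GPU is attached only while this function runs, in a worker process
def answer(question: str) -> str:
    # Do the GPU work here and return a plain string, so no CUDA-backed
    # object is ever pickled back to the main process.
    return llm.invoke(question)

demo = gr.Interface(fn=answer, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```

The key change relative to the failing stack trace is that no GPU-wrapped function returns a model or chain to the caller: model loading happens at import time, and the only values crossing the worker/main-process boundary are strings.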