runtime error
Exit code: 1. Reason: cpu
/home/user/app/app.py:46: LangChainDeprecationWarning: The class `HuggingFacePipeline` was deprecated in LangChain 0.0.37 and will be removed in 1.0. An updated version of the class exists in the :class:`~langchain-huggingface package and should be used instead. To use it run `pip install -U :class:`~langchain-huggingface` and import as `from :class:`~langchain_huggingface import HuggingFacePipeline``.
  llm = HuggingFacePipeline(pipeline=query_pipeline)
/home/user/app/app.py:64: LangChainDeprecationWarning: The method `BaseLLM.__call__` was deprecated in langchain-core 0.1.7 and will be removed in 1.0. Use :meth:`~invoke` instead.
  response = llm(prompt=prompt)
Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
Both `max_new_tokens` (=500) and `max_length` (=6000) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co./docs/transformers/main/en/main_classes/text_generation)
Traceback (most recent call last):
  File "/home/user/app/app.py", line 67, in <module>
    from IPython.display import display, Markdown
ModuleNotFoundError: No module named 'IPython'
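Only the final traceback is fatal: line 67 of app.py imports IPython.display, which is not installed in the container, so the process exits with code 1. The LangChain deprecation notices and the transformers generation warnings above it are non-fatal. Below is a minimal sketch of the affected lines, assuming `query_pipeline` is the transformers pipeline already built earlier in app.py (as shown in the log); the prompt string and the old import location are placeholders, not the app's real code:

```python
# Replaces the deprecated HuggingFacePipeline import (wherever it currently
# comes from in app.py); requires `langchain-huggingface` in requirements.txt
# (`pip install -U langchain-huggingface`).
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline(pipeline=query_pipeline)  # query_pipeline built earlier in app.py

prompt = "..."  # placeholder for the prompt assembled in the real script

# `llm(prompt=prompt)` goes through the deprecated BaseLLM.__call__;
# the warning recommends `invoke` instead.
response = llm.invoke(prompt)

# The container has no IPython, so `from IPython.display import display, Markdown`
# raises the ModuleNotFoundError that stops the app. Either add `ipython` to
# requirements.txt or print the response instead of using notebook-only display:
print(response)
```

The remaining transformers warnings point at how `query_pipeline` itself is created: both `max_new_tokens` (=500) and `max_length` (=6000) are set and `truncation=True` is not. Dropping `max_length`, keeping only `max_new_tokens=500`, and passing `truncation=True` should quiet them, but they are not what causes the exit.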