IndexError: list index out of range

#1
by CR2022 - opened

2023-06-25 03:55:27 INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models/wizardlm-33b-v1.0-uncensored.ggmlv3.q5_K_S/wizardlm-33b-v1.0-uncensored.ggmlv3.q5_K_S.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 16 (mostly Q5_K - Small)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.13 MB
llama_model_load_internal: mem required = 23661.29 MB (+ 3124.00 MB per state)
....................................................................................................
llama_init_from_file: kv self size = 3120.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
2023-06-25 03:55:28 INFO:Loaded the model in 0.60 seconds.
[attached screenshot: 5266324524.png]
Traceback (most recent call last):
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/routes.py", line 427, in run_predict
output = await app.get_blocks().process_api(
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api
result = await self.call_function(
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/blocks.py", line 1067, in call_function
prediction = await utils.async_iteration(iterator)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 336, in async_iteration
return await iterator.__anext__()
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 329, in __anext__
return await anyio.to_thread.run_sync(
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 312, in run_sync_iterator_async
return next(iterator)
File "/root/text-generation-webui/modules/chat.py", line 295, in generate_chat_reply_wrapper
for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
File "/root/text-generation-webui/modules/chat.py", line 280, in generate_chat_reply
for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
File "/root/text-generation-webui/modules/chat.py", line 164, in chatbot_wrapper
stopping_strings = get_stopping_strings(state)
File "/root/text-generation-webui/modules/chat.py", line 128, in get_stopping_strings
state['turn_template'].split('<|user-message|>')[1].split('<|bot|>')[0] + '<|bot|>',
IndexError: list index out of range
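
For reference, the failing line in modules/chat.py extracts the text between the '<|user-message|>' and '<|bot|>' placeholders of the active turn template. If the template is empty or missing the placeholder (as can happen when the UI loads a broken instruction template), split() returns a one-element list and indexing [1] raises. A minimal sketch of the mechanism; the template value here is an illustrative Vicuna-style example, not taken from this setup:

```python
# What modules/chat.py line 128 does, given a well-formed template:
turn_template = '<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n'
suffix = turn_template.split('<|user-message|>')[1].split('<|bot|>')[0]
print(repr(suffix))  # '\n' -- the text between the user message and the bot marker

# With an empty or placeholder-free template, the first split yields a
# single-element list, so [1] is out of range:
''.split('<|user-message|>')[1]  # IndexError: list index out of range
```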

With llama.cpp directly, the model works fine.

It should also work with the latest version of text-generation-webui, according to the model card:

"They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt."

Yes, it works fine with text-generation-webui.

That error you've got is a bug with the prompt template in the UI, not related to the model itself. Try updating text-generation-webui if you haven't already; it may be fixed by now.
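
If updating doesn't resolve it, a defensive check before the split would avoid the crash. This is a hypothetical patch sketch, not the project's actual fix:

```python
def safe_turn_suffix(turn_template: str) -> str:
    """Return the text between <|user-message|> and <|bot|>, or '' when
    either placeholder is missing, instead of raising IndexError."""
    if '<|user-message|>' not in turn_template or '<|bot|>' not in turn_template:
        return ''
    return turn_template.split('<|user-message|>')[1].split('<|bot|>')[0]
```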

Ok, thank you. I will check it out later today when I can launch text-generation-webui, and if it works I will close this discussion.

text-generation-webui is fully up to date, but I still get the same error in the terminal.

When I check the chat settings / instruction template, it shows errors in red. I tried the correct modes, chat-instruct and instruct, but both give red errors.
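
For comparison, the splitting code needs both placeholders present in the template string. This Vicuna-style value is an assumption based on the placeholders in the traceback, not the exact template this model ships with; a quick sanity check could look like:

```python
# A turn_template the code in modules/chat.py can parse without raising;
# the red errors in the UI suggest the loaded template lacks these markers.
state = {'turn_template': '<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n'}
assert '<|user-message|>' in state['turn_template']
assert '<|bot|>' in state['turn_template']
```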

The model works in the chat function, so the problem must be with text-generation-webui and not the model itself.

CR2022 changed discussion status to closed
