
"IndexError: list index out of range" with chat-instruct #6622

Open
1 task done
Darkzarich opened this issue Dec 31, 2024 · 0 comments
Open
1 task done

"IndexError: list index out of range" with chat-instruct #6622

Darkzarich opened this issue Dec 31, 2024 · 0 comments
Labels
bug Something isn't working

Comments

Darkzarich commented Dec 31, 2024

Describe the bug

After updating to v2.0, a previously working Llama-3.1 Instruct GGUF model stopped working in chat-instruct mode; it still works in plain chat mode. In chat-instruct mode the reply simply never appears in the UI, and an error is printed in the terminal.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

  1. Update to (or freshly get) the latest "text-generation-webui" version (v2.0)
  2. Load the DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-Q4_K_M-imat.gguf model (it worked fine before the update)
  3. By default the console reports: It seems to be an instruction-following model with template "Custom (obtained from model metadata)". In the chat tab, instruct or chat-instruct modes should be used. (See the metadata sketch after these steps.)
  4. Go to the chat tab and try to chat with a character (one that worked before the update)
  5. After sending, the message doesn't appear in the UI and the terminal prints the error shown in the Logs section below
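
For background on step 3: the "Custom (obtained from model metadata)" template is the Jinja2 chat template embedded in the GGUF file itself. A minimal way to inspect it, assuming llama-cpp-python is installed (recent versions expose GGUF key/value pairs via the Llama.metadata dict; the path below is illustrative):

```python
from llama_cpp import Llama

# Load the model and print the chat template embedded in its GGUF metadata.
# The webui derives its "Custom (obtained from model metadata)" template
# from this same embedded Jinja2 string.
llm = Llama(model_path="DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-Q4_K_M-imat.gguf")
print(llm.metadata.get("tokenizer.chat_template"))
```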

Not sure whether it matters, but here are the loader options used:

  • loader: llama.cpp
  • n_ctx: 8192
  • flash_attn
  • tensorcores
  • n_batch: 512
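
For reference, these options map roughly onto llama-cpp-python's constructor as below (a sketch; the webui's tensorcores option selects a CUDA build of the backend and has no direct constructor equivalent):

```python
from llama_cpp import Llama

# Approximate llama-cpp-python equivalent of the loader settings above
# (a sketch; "tensorcores" is a webui build/packaging option, not a
# constructor argument).
llm = Llama(
    model_path="DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-Q4_K_M-imat.gguf",
    n_ctx=8192,       # context length, as set above
    n_batch=512,      # batch size, as set above
    flash_attn=True,  # flash attention, as enabled above
)
```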

Screenshot

(image attachment)

Logs

22:00:59-612978 INFO     Loaded "DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-Q4_K_M-imat.gguf" in 10.85 seconds.
22:00:59-615979 INFO     LOADER: "llama.cpp"
22:00:59-616980 INFO     TRUNCATION LENGTH: 8192
22:00:59-617980 INFO     INSTRUCTION TEMPLATE: "Custom (obtained from model metadata)"
Traceback (most recent call last):
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 541, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1928, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1526, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 657, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 650, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2505, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 1005, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 633, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 816, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\modules\chat.py", line 443, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
  File "Y:\text-generation-webui-main\text-generation-webui-main\modules\chat.py", line 410, in generate_chat_reply
    for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
  File "Y:\text-generation-webui-main\text-generation-webui-main\modules\chat.py", line 305, in chatbot_wrapper
    stopping_strings = get_stopping_strings(state)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\modules\chat.py", line 265, in get_stopping_strings
    prefix_bot, suffix_bot = get_generation_prompt(renderer, impersonate=False)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\text-generation-webui-main\text-generation-webui-main\modules\chat.py", line 73, in get_generation_prompt
    suffix_plus_prefix = prompt.split("<<|user-message-1|>>")[1].split("<<|user-message-2|>>")[0]
                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
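
For context: the failing line in modules/chat.py (bottom of the traceback) renders the chat template with sentinel markers standing in for the user message, then splits the rendered prompt on those markers to recover the text surrounding the bot's turn. If the rendered prompt does not contain "<<|user-message-1|>>" at all (for example, because the metadata-derived template does not reproduce the user message verbatim), split() returns a single-element list and the [1] index raises exactly this IndexError. A minimal sketch of the failure mode (the marker strings are the real sentinels from the traceback; the two example prompts are hypothetical stand-ins for rendered templates):

```python
# Minimal sketch of the failure in get_generation_prompt (modules/chat.py).

def suffix_plus_prefix(prompt: str) -> str:
    # Original logic: assumes the first marker survives template rendering.
    return prompt.split("<<|user-message-1|>>")[1].split("<<|user-message-2|>>")[0]

# Works when the rendered template keeps both markers intact:
ok = "<|user|> <<|user-message-1|>><<|user-message-2|>> <|assistant|>"
print(repr(suffix_plus_prefix(ok)))  # -> '' (the text between the markers)

# Fails when the template drops or rewrites the first marker:
bad = "<|user|> hello <|assistant|>"
try:
    suffix_plus_prefix(bad)
except IndexError as err:
    print("IndexError:", err)  # list index out of range, as in the log
```

A str.partition-based guard would degrade gracefully here, but the root cause is presumably that the metadata-derived template doesn't round-trip the sentinel.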

System Info

GPU: GeForce RTX 3070 Ti
CPU: AMD Ryzen 5 5600X 6-Core
RAM: 32 GB DDR5
Darkzarich added the bug label on Dec 31, 2024