
[ERROR] tuple index out of range #90

Open
Gohloum opened this issue Oct 29, 2024 · 2 comments

Comments


Gohloum commented Oct 29, 2024

Hello,

I am getting this error when I try to run the node. I have installed the requirements and everything seems to be installed correctly. I also performed full updates and restarted just to make sure. Here is the error I am getting:

tuple index out of range

ComfyUI Error Report

Error Details

  • Node Type: Florence2Run
  • Exception Type: IndexError
  • Exception Message: tuple index out of range

Stack Trace

  File "/Volumes/OWC Envoy Pro FX/_AI Tools/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/OWC Envoy Pro FX/_AI Tools/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/OWC Envoy Pro FX/_AI Tools/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/Volumes/OWC Envoy Pro FX/_AI Tools/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/OWC Envoy Pro FX/_AI Tools/ComfyUI/custom_nodes/ComfyUI-Florence2/nodes.py", line 296, in encode
    generated_ids = model.generate(
                    ^^^^^^^^^^^^^^^

  File "/Users/tj/.cache/huggingface/modules/transformers_modules/Florence-2-base/modeling_florence2.py", line 2796, in generate
    return self.language_model.generate(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/tj/miniconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "/Users/tj/miniconda3/lib/python3.12/site-packages/transformers/generation/utils.py", line 1828, in generate
    self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)

  File "/Users/tj/miniconda3/lib/python3.12/site-packages/transformers/generation/utils.py", line 1677, in _prepare_special_tokens
    and isin_mps_friendly(elements=eos_token_tensor, test_elements=pad_token_tensor).any()
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/tj/miniconda3/lib/python3.12/site-packages/transformers/pytorch_utils.py", line 325, in isin_mps_friendly
    return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()
                         ~~~~~~~~~~~~~~~~~~~^^^
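
The `IndexError` surfaces at `test_elements.shape[0]` inside `isin_mps_friendly`, which suggests the pad-token tensor reaches that helper as a 0-dim scalar on the MPS code path. A 0-dim tensor's `shape` is the empty tuple, so indexing it raises exactly this error. A minimal illustration of the failing indexing (my reconstruction, not the library's code):

```python
import torch

# A scalar (0-dim) tensor has shape == torch.Size([]), i.e. an empty tuple,
# so shape[0] raises IndexError: tuple index out of range -- the same
# exception reported in the trace above.
pad_token_tensor = torch.tensor(1)   # 0-dim tensor, as a pad token id might be
print(pad_token_tensor.shape)        # torch.Size([])
print(pad_token_tensor.shape[0])     # IndexError: tuple index out of range
```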

System Information

  • ComfyUI Version: v0.2.5-3-g65a8659
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 10:07:17) [Clang 14.0.6 ]
  • Embedded Python: false
  • PyTorch Version: 2.3.1

Devices

  • Name: mps
    • Type: mps
    • VRAM Total: 68719476736
    • VRAM Free: 37896896512
    • Torch VRAM Total: 68719476736
    • Torch VRAM Free: 37896896512
@SwordFaith

Got the same problem.

@SwordFaith

It's because the Florence2 model can't handle the MPS backend. Setting the model device to cuda or cpu works for me.
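
A minimal sketch of that workaround, assuming you patch the device selection where the model is loaded (the exact integration point in ComfyUI-Florence2's nodes.py may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

# Hypothetical workaround: prefer CUDA when available, otherwise fall back to
# CPU, so the model never lands on the MPS backend.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True
)
```

On Apple Silicon this means CPU inference, which is slower but avoids the `isin_mps_friendly` crash entirely.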
