TypeError: scaled_dot_product_attention(): argument 'query' (position 1) must be Tensor, not list #19

Open
sneccc opened this issue Jul 31, 2024 · 10 comments


@sneccc

sneccc commented Jul 31, 2024

workflow.json


To see the GUI go to: http://127.0.0.1:8188
got prompt
[rgthree] Using rgthree's optimized recursive execution.
[rgthree] First run patching recursive_output_delete_if_changed and recursive_will_execute.
[rgthree] Note: If execution seems broken due to forward ComfyUI changes, you can disable the optimization from rgthree settings in ComfyUI.
using sdpa for attention
</s><s><s><s>face<loc_155><loc_3><loc_944><loc_998><loc_156><loc_4><loc_771><loc_443><loc_229><loc_98><loc_758><loc_435></s>
match index: 0 in mask_indexes: ['0', '1', '2']
match index: 1 in mask_indexes: ['0', '1', '2']
match index: 2 in mask_indexes: ['0', '1', '2']
Offloading model...
Type of data: <class 'list'>
Data: [[[72.77400207519531, 2.1875, 442.0260009765625, 624.0625], [73.24199676513672, 2.8125, 361.0619812011719, 277.1875], [107.40599822998047, 61.5625, 354.9779968261719, 272.1875]]]
Indexes: [1]
Coordinates: [{"x": 217, "y": 140}]
positive coordinates:  [[217 140]]
For numpy array image, we assume (HxWxC) format
Computing image embeddings for the provided image...
Image embeddings computed.
!!! Exception during processing!!! scaled_dot_product_attention(): argument 'query' (position 1) must be Tensor, not list
Traceback (most recent call last):
  File "G:\comfy_dev\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "G:\comfy_dev\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "G:\comfy_dev\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\nodes.py", line 217, in segment
    masks, scores, logits = model.predict(
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\sam2_image_predictor.py", line 271, in predict
    masks, iou_predictions, low_res_masks = self._predict(
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\sam2_image_predictor.py", line 400, in _predict
    low_res_masks, iou_predictions, _, _ = self.model.sam_mask_decoder(
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\mask_decoder.py", line 136, in forward
    masks, iou_pred, mask_tokens_out, object_score_logits = self.predict_masks(
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\mask_decoder.py", line 213, in predict_masks
    hs, src = self.transformer(src, pos_src, tokens)
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\transformer.py", line 105, in forward
    queries, keys = layer(
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\transformer.py", line 171, in forward
    queries = self.self_attn(q=queries, k=queries, v=queries)
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Daniel\anaconda3\envs\comfy_conda\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\comfy_dev\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\transformer.py", line 261, in forward
    with sdpa_kernel(backends):
TypeError: scaled_dot_product_attention(): argument 'query' (position 1) must be Tensor, not list
@Omarsmsm

Omarsmsm commented Aug 1, 2024

Got this error as well.

@kijai
Owner

kijai commented Aug 1, 2024

Torch version? Python version? What GPU? Does it work on CPU?

@Omarsmsm

Omarsmsm commented Aug 1, 2024

Mine is python 3.11.8
Total VRAM 16376 MB, total RAM 31962 MB
pytorch version: 2.2.2+cu121
xformers version: 0.0.25.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090

It doesn't work on CPU or GPU. The CPU path only accepts FP32, but I still get the same error.

@PenguinTeo

> Torch version? Python version? What GPU? Does it work on CPU?

Hello, KJ.
After testing: torch 2.2.0 cannot be used, but 2.3.0 runs fine.

@PenguinTeo

> Torch version? Python version? What GPU? Does it work on CPU?

Perhaps you could consider supporting torch 2.2.0+cu118.

@kane1718

kane1718 commented Aug 1, 2024

I have the same problem with image processing

@kijai
Owner

kijai commented Aug 1, 2024

The original code had marked torch 2.3.1 as the minimum. What's the reason to use an old torch version? Current torch stable is already 2.4.0, which is also what ComfyUI itself now ships with.

@pppyl

pppyl commented Aug 1, 2024

> The original code had marked torch 2.3.1 as minimum. What's the reason to use old torch version? Current torch stable is at 2.4.0 already, which is also what ComfyUI itself comes with now.

Perhaps some other niche plugins require torch 2.2.0. Personally, I think it's fine to uninstall the niche plugins that still require it.

@kijai
Owner

kijai commented Aug 1, 2024

It should work with 2.2.0 now anyway; I'd still recommend using a newer torch version.
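For readers pinned to an older torch, the version gate being discussed can be sketched roughly like this (illustrative only, not the repository's actual code; the helper names are made up). The relevant fact is that torch >= 2.3 ships `torch.nn.attention.sdpa_kernel`, which accepts a list of `SDPBackend` values, while older releases only provide the boolean-flag context manager `torch.backends.cuda.sdp_kernel`:

```python
# Sketch (hypothetical helpers): pick the SDPA backend-selection API
# based on the installed torch version string.

def parse_torch_version(version_string):
    """'2.2.2+cu121' -> (2, 2, 2); the local build tag after '+' is ignored."""
    core = version_string.split("+")[0]
    return tuple(int(part) for part in core.split(".")[:3])

def sdpa_context_api(version_string):
    """Name the backend-selection context manager available for this torch."""
    if parse_torch_version(version_string) >= (2, 3, 0):
        return "torch.nn.attention.sdpa_kernel"
    return "torch.backends.cuda.sdp_kernel"
```

With the `2.2.2+cu121` build reported above, this resolves to the old `torch.backends.cuda.sdp_kernel` API, which is one plausible reason code written against the newer list-based `sdpa_kernel` signature misbehaves on 2.2.x.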

@skydam

skydam commented Aug 1, 2024

I was still having trouble with the old 2.2.0. For those of you who are also having this problem, run update_comfyui_and_python_dependencies.bat in your \ComfyUI_windows_portable\update directory. It will update your torch to 2.4.0.
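After updating, a quick sanity check (a sketch, assuming only that torch >= 2.3 exposes the `torch.nn.attention` module with `sdpa_kernel`) can confirm the environment now has a new enough torch:

```python
# Sketch: report whether the installed torch provides the
# torch.nn.attention.sdpa_kernel API that this node pack relies on.
import importlib

def has_sdpa_kernel():
    """Return True if torch ships torch.nn.attention.sdpa_kernel (torch >= 2.3)."""
    try:
        attention = importlib.import_module("torch.nn.attention")
    except ImportError:
        # torch is missing, or it's an older torch without torch.nn.attention
        return False
    return hasattr(attention, "sdpa_kernel")

print("sdpa_kernel available:", has_sdpa_kernel())
```

If this prints `False` after the update, the ComfyUI launcher is likely using a different Python environment than the one that was updated.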


7 participants