
[bug]: [5.4.2] Assertion error related to text_encoder while trying to generate with Flux Dev Quantized #7370

Open
IcePanther opened this issue Nov 22, 2024 · 0 comments
Labels
bug Something isn't working

Comments


Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

3080Ti laptop

GPU VRAM

16GB

Version number

5.4.2

Browser

MS Edge 131.0.2903.51

Python dependencies

{
  "accelerate": "1.0.1",
  "compel": "2.0.2",
  "cuda": "12.4",
  "diffusers": "0.31.0",
  "numpy": "1.26.4",
  "opencv": "4.9.0.80",
  "onnx": "1.16.1",
  "pillow": "11.0.0",
  "python": "3.10.9",
  "torch": "2.4.1+cu124",
  "torchvision": "0.19.1+cu124",
  "transformers": "4.41.1",
  "xformers": null
}

What happened

Clicked Generate and got a "Server Error" popup.
The log shows an AssertionError regarding the text encoder.

Log:

[2024-11-22 01:50:16,777]::[uvicorn.access]::INFO --> 127.0.0.1:62664 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-11-22 01:50:16,805]::[uvicorn.access]::INFO --> 127.0.0.1:62664 - "GET /api/v1/queue/default/current HTTP/1.1" 200
[2024-11-22 01:50:16,812]::[uvicorn.access]::INFO --> 127.0.0.1:62665 - "GET /api/v1/queue/default/counts_by_destination?destination=canvas HTTP/1.1" 200
[2024-11-22 01:50:16,817]::[uvicorn.access]::INFO --> 127.0.0.1:62666 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-11-22 01:50:16,817]::[InvokeAI]::ERROR --> Error while invoking session 86c1ba96-3f6a-4c43-8000-f78ba90ec9c1, invocation 46c25276-78f0-478e-b718-92a6c57a42db (flux_text_encoder):
[2024-11-22 01:50:16,817]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\AI\InvokeAI\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\AI\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 300, in invoke_internal
    output = self.invoke(context)
  File "C:\AI\InvokeAI\.venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 50, in invoke
    t5_embeddings = self._t5_encode(context)
  File "C:\AI\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 69, in _t5_encode
    assert isinstance(t5_text_encoder, T5EncoderModel)
AssertionError

[2024-11-22 01:50:16,851]::[uvicorn.access]::INFO --> 127.0.0.1:62666 - "GET /assets/images/invoke-alert-favicon.svg HTTP/1.1" 200
[2024-11-22 01:50:16,860]::[InvokeAI]::INFO --> Graph stats: 86c1ba96-3f6a-4c43-8000-f78ba90ec9c1
                          Node   Calls   Seconds  VRAM Used
             flux_model_loader       1    0.000s     1.310G
             flux_text_encoder       1    0.012s     1.310G
TOTAL GRAPH EXECUTION TIME:   0.012s
TOTAL GRAPH WALL TIME:   0.012s
RAM used by InvokeAI process: 9.59G (+0.000G)
RAM used to load models: 1.29G
VRAM in use: 1.310G
RAM cache statistics:
   Model cache hits: 2
   Model cache misses: 0
   Models cached: 4
   Models cleared from cache: 0
   Cache high water mark: 6.05/15.00G

What you expected to happen

Generation completes successfully.

How to reproduce the problem

Select Flux Dev Quantized, type in a prompt, and hit Generate.

Additional context

No LoRAs or other add-ons; just the bare FLUX.dev model loaded from the starter packs in the model manager.
The T5 encoder is: t5_bnb_int8_quantized_encoder
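For context, the failing line (`assert isinstance(t5_text_encoder, T5EncoderModel)`) means the object the model manager handed back for the quantized encoder is not the plain `transformers.T5EncoderModel` class the node expects. A minimal sketch of that failure mode, using stand-in classes rather than InvokeAI's actual loader or quantization code (the real class names are assumptions here):

```python
class T5EncoderModel:
    """Stand-in for transformers.T5EncoderModel."""
    pass


class QuantizedT5Wrapper:
    """Stand-in for a quantized-model wrapper that holds,
    but does not subclass, the original encoder class."""
    def __init__(self, inner: T5EncoderModel):
        self.inner = inner


plain = T5EncoderModel()
wrapped = QuantizedT5Wrapper(T5EncoderModel())

# The plain encoder satisfies the node's type check...
assert isinstance(plain, T5EncoderModel)

# ...but a wrapper of a different type does not, which is
# the shape of the AssertionError in the log above.
assert not isinstance(wrapped, T5EncoderModel)
print("wrapped model fails the isinstance check")
```

This is only an illustration of why an `isinstance` assertion can reject a functionally equivalent quantized model; whether InvokeAI's bnb-int8 loader actually returns a non-subclass wrapper in 5.4.2 would need confirming in the source.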

Discord username

No response

@IcePanther IcePanther added the bug Something isn't working label Nov 22, 2024