Protobuf library not found #67

Open
cavalia88 opened this issue Jun 20, 2024 · 5 comments

Comments

@cavalia88

Hi,

I'm trying to run Hunyuan DiT version 1.1 on ComfyUI. I installed ComfyUI_ExtraModels and followed the instructions on the main page. I used the sample workflow from your page, but I get the following error when I run a prompt.

I have already installed the protobuf library, but ComfyUI cannot seem to find it. Any ideas?

model_type V_PREDICTION
flash_attn import failed: No module named 'flash_attn'
    Number of tokens: 4096
HYDiT: clip missing 2 keys (394 extra)
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
!!! Exception during processing!!!
T5Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.
@HazmatAI

I have the same issue. When I check for protobuf, it says it is already installed.

@city96 (Owner) commented Jun 20, 2024

Are you running Python with the -s flag? Without it, it sometimes picks up packages installed to the system environment for some reason.

For embedded:

.\python_embeded\python.exe -s -m pip install protobuf

(For standalone, just activate the venv/conda environment and use python -s -m pip install protobuf)
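As a quick sanity check (just a sketch, not part of any official instructions), you can confirm which interpreter you are actually calling and where pip finds the package; the "Location" line from pip show should point inside your ComfyUI environment:

where python
python -c "import sys; print(sys.executable)"
python -s -m pip show protobuf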

@cavalia88 (Author) commented Jun 21, 2024

I'm a newbie here and not familiar with Python environments. I just followed some guides to install ComfyUI. Based on what I can find, yes, I believe it is a venv environment. I've attached the relevant launch files for your reference. Now I'm getting another error message, "No module named 'timm'". When I use pip install, it says it's already installed, so I think I'm not installing it into the right venv. Help would be much appreciated.
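For reference, this is roughly what I think I need to run to get timm into that venv (just my guess based on the reply above, assuming the venv really is the one at H:\ComfyUI3\venv shown in the activate script below):

cd /d H:\ComfyUI3
call venv\Scripts\activate
python -s -m pip install timm
python -s -m pip show timm
deactivate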

My bat file to launch ComfyUI on Windows:

@echo off
REM Check if venv directory exists
IF NOT EXIST "venv" (
    echo Creating virtual environment...
    python -m venv venv

    REM Activate the virtual environment
    echo Activating virtual environment...
    call venv\Scripts\activate

    REM Upgrade pip to the latest version
    echo Upgrading pip...
    python -m pip install --upgrade pip

    REM Install dependencies from requirements.txt
    IF EXIST "requirements.txt" (
        echo Installing dependencies from requirements.txt...
        pip install -r requirements.txt
    ) ELSE (
        echo requirements.txt not found. Skipping dependency installation.
    )

    REM Install torch, torchvision, and torchaudio with specific index URL
    echo Installing torch, torchvision, and torchaudio...
    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
) ELSE (
    REM Activate the virtual environment
    echo Activating virtual environment...
    call venv\Scripts\activate
)

REM Run the main.py script with --auto-launch argument
echo Running main.py with --auto-launch argument...
python main.py --auto-launch

REM Deactivate the virtual environment
echo Deactivating virtual environment...
deactivate

The venv\Scripts\activate script (activate.bat) is below:

@echo off

rem This file is UTF-8 encoded, so we need to update the current code page while executing it
for /f "tokens=2 delims=:." %%a in ('"%SystemRoot%\System32\chcp.com"') do (
    set _OLD_CODEPAGE=%%a
)
if defined _OLD_CODEPAGE (
    "%SystemRoot%\System32\chcp.com" 65001 > nul
)

set VIRTUAL_ENV=H:\ComfyUI3\venv

if not defined PROMPT set PROMPT=$P$G

if defined _OLD_VIRTUAL_PROMPT set PROMPT=%_OLD_VIRTUAL_PROMPT%
if defined _OLD_VIRTUAL_PYTHONHOME set PYTHONHOME=%_OLD_VIRTUAL_PYTHONHOME%

set _OLD_VIRTUAL_PROMPT=%PROMPT%
set PROMPT=(venv) %PROMPT%

if defined PYTHONHOME set _OLD_VIRTUAL_PYTHONHOME=%PYTHONHOME%
set PYTHONHOME=

if defined _OLD_VIRTUAL_PATH set PATH=%_OLD_VIRTUAL_PATH%
if not defined _OLD_VIRTUAL_PATH set _OLD_VIRTUAL_PATH=%PATH%

set PATH=%VIRTUAL_ENV%\Scripts;%PATH%
set VIRTUAL_ENV_PROMPT=(venv) 

:END
if defined _OLD_CODEPAGE (
    "%SystemRoot%\System32\chcp.com" %_OLD_CODEPAGE% > nul
    set _OLD_CODEPAGE=
)

Current error I receive:

got prompt
model_type V_PREDICTION
!!! Exception during processing!!! No module named 'timm'
Traceback (most recent call last):
  File "H:\ComfyUI3\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI3\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI3\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI3\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\nodes.py", line 26, in load_checkpoint
    model = load_hydit(
            ^^^^^^^^^^^
  File "H:\ComfyUI3\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\loader.py", line 64, in load_hydit
    from .models.models import HunYuanDiT
  File "H:\ComfyUI3\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\models.py", line 4, in <module>
    from timm.models.vision_transformer import Mlp
ModuleNotFoundError: No module named 'timm'

ComfyUI startup log:

Activating virtual environment...
Running main.py with --auto-launch argument...
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-06-21 09:50:23.084016
** Platform: Windows
** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
** Python executable: H:\ComfyUI3\venv\Scripts\python.exe
** Log path: H:\ComfyUI3\comfyui.log

Prestartup times for custom nodes:
   0.6 seconds: H:\ComfyUI3\custom_nodes\ComfyUI-Manager

Total VRAM 24560 MB, total RAM 97435 MB
Set vram state to: NORMAL_VRAM
Detected ZLUDA, support for it is experimental and comfy may not work properly.
Disabling cuDNN because ZLUDA does currently not support it.
Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native
VAE dtype: torch.bfloat16
Using pytorch cross attention

@cavalia88 (Author)

I managed to overcome the missing protobuf error, but now I get a "Numpy is not available" message. I've documented the steps below in case they are useful to others.

Navigate to your project directory, then:

python -m venv venv
venv\Scripts\activate
pip install protobuf
python -m pip show protobuf
deactivate
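As an extra check (my own addition, not part of the steps above), an import test like this from inside the activated venv should confirm that the venv's own interpreter, and not the system one, can actually see protobuf:

python -c "import google.protobuf; print(google.protobuf.__version__)"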

The NumPy error is below. I tried using the same method as above to install numpy, but it still doesn't work.

HYDiT: mT5 missing 0 keys (340 extra)
Requested to load EXM_HYDiT_Model
Loading 1 new model
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
!!! Exception during processing!!! Numpy is not available
Traceback (most recent call last):
  File "H:\Comfy_HunyuanDit\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\nodes.py", line 1371, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\nodes.py", line 1341, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 794, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 696, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 683, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 662, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 567, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\k_diffusion\sampling.py", line 223, in sample_dpm_2
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 291, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 649, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 652, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 277, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\samplers.py", line 226, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\comfy\model_base.py", line 113, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\venv\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\venv\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\models.py", line 382, in forward
    rope = self.calc_rope(*image_size)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\models.py", line 356, in calc_rope
    rope = get_2d_rotary_pos_embed(self.head_size, *sub_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\posemb_layers.py", line 141, in get_2d_rotary_pos_embed
    pos_embed = get_2d_rotary_pos_embed_from_grid(embed_dim, grid, use_real=use_real)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\posemb_layers.py", line 149, in get_2d_rotary_pos_embed_from_grid
    emb_h = get_1d_rotary_pos_embed(embed_dim // 2, grid[0].reshape(-1), use_real=use_real)  # (H*W, D/4)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Comfy_HunyuanDit\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\posemb_layers.py", line 183, in get_1d_rotary_pos_embed
    t = torch.from_numpy(pos).to(freqs.device)  # type: ignore  # [S]
        ^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Numpy is not available

@cavalia88 (Author) commented Jun 21, 2024

After further troubleshooting with ChatGPT, I finally managed to get it running. The issue was the NumPy version.

I downgraded from version 2.0.0 to 1.26.4 and it works now. I'm finally able to generate an image.

Run in the venv:

pip uninstall numpy
pip install numpy==1.26.4
pip show numpy
Error message when running the command python H:\Comfy_HunyuanDit\execution.py:

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
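To keep a future reinstall from pulling NumPy 2 back in, it probably also makes sense to pin it (my own guess, not from the ComfyUI docs), either in requirements.txt or directly in the venv:

pip install "numpy<2"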
