SadTalker Tab missing on Stable Diffusion Forge (automatic1111) installed on an M1 #830

Open
pete-burgess opened this issue Mar 13, 2024 · 3 comments

@pete-burgess

I went through the installation process, and everything seemed to install correctly. I ran the download_models script, and I'm sure the checkpoints ended up in the proper place.

When I reload the UI (or relaunch Forge), there is no SadTalker tab. I also noticed the error output below and assume the cause is in there somewhere. I've seen similar issues reported elsewhere, but I haven't found a fix. I'm working on an M1 Mac running Sonoma 14.4. I also, for the life of me, cannot update to torch 2.1.2, even after running ./webui.sh --reinstall-torch. I'm not sure whether that is part of the issue, too.

Any help would be appreciated!

[Screenshot: terminal output, 2024-03-12 7:46:51 PM]
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on HIDDEN-NAME user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.6 (v3.10.6:9c7b4bd164, Aug  1 2022, 17:13:48) [Clang 13.0.0 (clang-1300.0.29.30)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
Total VRAM 16384 MB, total RAM 16384 MB
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
CUDA Stream Activated:  False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
==============================================================================
You are running torch 2.1.0.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
ControlNet preprocessor location: /Users/HIDDEN-NAME/stable-diffusion-webui-forge/models/ControlNetPreprocessor
Loading weights [a6b4c0392d] from /Users/HIDDEN-NAME/stable-diffusion-webui-forge/models/Stable-diffusion/rcnzCartoon3d_v10.safetensors
2024-03-12 19:42:28,357 - ControlNet - INFO - ControlNet UI callback registered.
load Sadtalker Checkpoints from /Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/checkpoints/
model_type EPS
UNet ADM Dimension 0
*** Error executing callback ui_tabs_callback for /Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/scripts/extension.py
    Traceback (most recent call last):
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/modules/script_callbacks.py", line 183, in ui_tabs_callback
        res += c.callback() or []
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/scripts/extension.py", line 172, in on_ui_tabs
        from app_sadtalker import sadtalker_demo
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/app_sadtalker.py", line 3, in <module>
        from src.gradio_demo import SadTalker
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/src/gradio_demo.py", line 6, in <module>
        from src.generate_batch import get_data
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/src/generate_batch.py", line 8, in <module>
        import src.utils.audio as audio
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/extensions/SadTalker/src/utils/audio.py", line 1, in <module>
        import librosa
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/librosa/__init__.py", line 211, in <module>
        from . import core
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/librosa/core/__init__.py", line 9, in <module>
        from .constantq import *  # pylint: disable=wildcard-import
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/librosa/core/constantq.py", line 1058, in <module>
        dtype=np.complex,
      File "/Users/HIDDEN-NAME/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/numpy/__init__.py", line 324, in __getattr__
        raise AttributeError(__former_attrs__[attr])
    AttributeError: module 'numpy' has no attribute 'complex'.
    `np.complex` was a deprecated alias for the builtin `complex`. To avoid this error in existing code, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here.
    The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
        https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

---
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 33.2s (prepare environment: 0.6s, import torch: 6.8s, import gradio: 1.7s, setup paths: 16.2s, other imports: 3.1s, load scripts: 1.9s, create ui: 1.9s, gradio launch: 0.8s).
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['model_ema.decay', 'model_ema.num_updates'])
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
Model loaded in 7.0s (load weights from disk: 0.3s, forge load real models: 6.1s, calculate empty prompt: 0.6s).
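
For what it's worth, the traceback above dies inside librosa on `np.complex`, an alias NumPy deprecated in 1.20 and removed in 1.24, so the `ui_tabs_callback` import fails and the tab never gets registered; the torch 2.1.0 vs 2.1.2 warning looks unrelated. A minimal workaround sketch, assuming the Forge venv from the paths above and that the installed librosa still references the alias:

```shell
# Run from the stable-diffusion-webui-forge checkout, inside its venv.
source venv/bin/activate

# Option 1: stay on a NumPy release that still ships the np.complex alias
# (the alias was removed in NumPy 1.24).
pip install "numpy<1.24"

# Option 2: move to a librosa release that no longer uses the alias,
# if SadTalker's pinned requirements allow it.
pip install --upgrade librosa
```

After either change, relaunching Forge should let `app_sadtalker` import cleanly and the SadTalker tab appear.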
@yukiarimo

Same issue

@AppStolz

(quoting the original report and startup log above)

Take a look here at a solution that works 100%
#822 (comment)

@pete-burgess
Author

Thank you. It still doesn't work for me, but I'm all good. Just have to move on for now.
