I can't find the issue... And searching the error doesn't produce results. #105

Open
SencneS opened this issue Dec 4, 2024 · 5 comments


SencneS commented Dec 4, 2024

The workflow used to work: I trained several LoRAs with it and tested some things. Whenever a workflow works, I save a copy and archive it. Now even the archived copy, with all the original settings that worked before, produces the same error.

The main error it throws is:

FluxTrainValidate
'FluxUpperLowerWrapper' object has no attribute 'prepare_block_swap_before_forward'

Here is the full error report, from a fresh restart.

ComfyUI Error Report

Error Details

  • Node ID: 192
  • Node Type: FluxTrainValidate
  • Exception Type: AttributeError
  • Exception Message: 'FluxUpperLowerWrapper' object has no attribute 'prepare_block_swap_before_forward'

Stack Trace

  File "D:\AI\Art\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\AI\Art\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\nodes.py", line 1200, in validate
    image_tensors = network_trainer.sample_images(*params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\flux_train_network_comfy.py", line 310, in sample_images
    image_tensors = flux_train_utils.sample_images(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\library\flux_train_utils.py", line 89, in sample_images
    image_tensor = sample_image_inference(
                   ^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\library\flux_train_utils.py", line 223, in sample_image_inference
    x = denoise(flux, noise, img_ids, t5_out, txt_ids, l_pooled, timesteps=timesteps, guidance=scale, t5_attn_mask=t5_attn_mask)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\library\flux_train_utils.py", line 310, in denoise
    model.prepare_block_swap_before_forward()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\Art\ComfyUI\penv\Lib\site-packages\torch\nn\modules\module.py", line 1931, in __getattr__
    raise AttributeError(
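For readers hitting the same trace: the final frame shows torch's `nn.Module.__getattr__` raising `AttributeError`, which happens whenever code calls a method the module class never defined. A minimal pure-Python sketch of the failure mode and of a `hasattr` guard (the class below is a hypothetical stand-in for the wrapper named in the traceback, and the guard is only a possible local workaround, not an official fix from the maintainers):

```python
class FluxUpperLowerWrapper:
    """Hypothetical stand-in for the wrapper class named in the traceback.

    It deliberately does NOT define prepare_block_swap_before_forward,
    reproducing the AttributeError reported above.
    """


def denoise_step(model):
    # Sketch of a defensive guard: only call the method when the wrapper
    # actually provides it, instead of failing like flux_train_utils.py:310.
    if hasattr(model, "prepare_block_swap_before_forward"):
        model.prepare_block_swap_before_forward()
        return "block swap prepared"
    return "no block swap support; skipped"


print(denoise_step(FluxUpperLowerWrapper()))  # → no block swap support; skipped
```

The real fix likely belongs in the node pack itself (the split upper/lower wrapper would need to implement or forward that method), but the guard shows why the call site blows up only for this wrapper type.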

System Information

  • ComfyUI Version: v0.3.5-2-g497db62
  • Arguments: main.py
  • OS: nt
  • Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 17170956288
    • VRAM Free: 3543321016
    • Torch VRAM Total: 11744051200
    • Torch VRAM Free: 247073208

Logs

2024-12-04T15:02:20.709523 - Windows2024-12-04T15:02:20.709523 - 
2024-12-04T15:02:20.709523 - ** Python version:2024-12-04T15:02:20.709523 -  2024-12-04T15:02:20.709523 - 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]2024-12-04T15:02:20.709523 - 
2024-12-04T15:02:20.709523 - ** Python executable:2024-12-04T15:02:20.709523 -  2024-12-04T15:02:20.709523 - D:\AI\Art\ComfyUI\penv\Scripts\python.exe2024-12-04T15:02:20.709523 - 
2024-12-04T15:02:20.709523 - ** ComfyUI Path:2024-12-04T15:02:20.709523 -  2024-12-04T15:02:20.709523 - D:\AI\Art\ComfyUI2024-12-04T15:02:20.710024 - 
2024-12-04T15:02:20.710024 - ** Log path:2024-12-04T15:02:20.710024 -  2024-12-04T15:02:20.710024 - D:\AI\Art\ComfyUI\comfyui.log2024-12-04T15:02:20.710024 - 
2024-12-04T15:02:21.864771 - 
Prestartup times for custom nodes:2024-12-04T15:02:21.864771 - 
2024-12-04T15:02:21.864771 -    0.0 seconds:2024-12-04T15:02:21.865274 -  2024-12-04T15:02:21.865274 - D:\AI\Art\ComfyUI\custom_nodes\rgthree-comfy2024-12-04T15:02:21.865274 - 
2024-12-04T15:02:21.865274 -    0.0 seconds:2024-12-04T15:02:21.865274 -  2024-12-04T15:02:21.866774 - D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Easy-Use2024-12-04T15:02:21.866774 - 
2024-12-04T15:02:21.866774 -    2.5 seconds:2024-12-04T15:02:21.866774 -  2024-12-04T15:02:21.866774 - D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Manager2024-12-04T15:02:21.866774 - 
2024-12-04T15:02:21.866774 - 
2024-12-04T15:02:23.353963 - Total VRAM 16376 MB, total RAM 130813 MB
2024-12-04T15:02:23.353963 - pytorch version: 2.5.1+cu124
2024-12-04T15:02:23.354966 - Set vram state to: NORMAL_VRAM
2024-12-04T15:02:23.355469 - Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
2024-12-04T15:02:24.298672 - Using pytorch cross attention
2024-12-04T15:02:25.305135 - [Prompt Server] web root: D:\AI\Art\ComfyUI\web
2024-12-04T15:02:25.982189 - Note: NumExpr detected 28 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
2024-12-04T15:02:25.982689 - NumExpr defaulting to 16 threads.
2024-12-04T15:02:26.110470 - [Crystools INFO] Crystools version: 1.21.0
2024-12-04T15:02:26.125389 - [Crystools INFO] CPU: Intel(R) Core(TM) i7-14700K - Arch: AMD64 - OS: Windows 10
2024-12-04T15:02:26.133904 - [Crystools INFO] Pynvml (Nvidia) initialized.
2024-12-04T15:02:26.133904 - [Crystools INFO] GPU/s:
2024-12-04T15:02:26.147945 - [Crystools INFO] 0) NVIDIA GeForce RTX 4070 Ti SUPER
2024-12-04T15:02:26.148446 - [Crystools INFO] NVIDIA Driver: 560.94
2024-12-04T15:02:27.282194 - [ComfyUI-Easy-Use] server: v1.2.5 Loaded
2024-12-04T15:02:27.282698 - [ComfyUI-Easy-Use] web root: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v2 Loaded
2024-12-04T15:02:27.557548 - ### Loading: ComfyUI-Inspire-Pack (V1.9)2024-12-04T15:02:27.557548 - 
2024-12-04T15:02:27.603153 - Total VRAM 16376 MB, total RAM 130813 MB
2024-12-04T15:02:27.604231 - pytorch version: 2.5.1+cu124
2024-12-04T15:02:27.604676 - Set vram state to: NORMAL_VRAM
2024-12-04T15:02:27.605176 - Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
2024-12-04T15:02:27.623846 - ### Loading: ComfyUI-Manager (V2.52.1)2024-12-04T15:02:27.623846 - 
2024-12-04T15:02:27.723765 - ### ComfyUI Revision: 2865 [497db621] | Released on '2024-11-26'2024-12-04T15:02:27.723765 - 
2024-12-04T15:02:28.002472 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json2024-12-04T15:02:28.002972 - 
2024-12-04T15:02:28.011990 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json2024-12-04T15:02:28.012491 - 
2024-12-04T15:02:28.033528 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json2024-12-04T15:02:28.034025 - 
2024-12-04T15:02:28.071102 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2024-12-04T15:02:28.071102 - 
2024-12-04T15:02:28.105702 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json2024-12-04T15:02:28.106203 - 
2024-12-04T15:02:28.451737 - D:\AI\Art\ComfyUI2024-12-04T15:02:28.451737 - 
2024-12-04T15:02:28.451737 - ############################################2024-12-04T15:02:28.452237 - 
2024-12-04T15:02:28.452237 - D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-NAI-styler\CSV2024-12-04T15:02:28.452237 - 
2024-12-04T15:02:28.452237 - ############################################2024-12-04T15:02:28.452237 - 
2024-12-04T15:02:28.452738 - []2024-12-04T15:02:28.452738 - 
2024-12-04T15:02:28.452738 - ############################################2024-12-04T15:02:28.452738 - 
2024-12-04T15:02:28.453740 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider2024-12-04T15:02:28.453740 - 
2024-12-04T15:02:28.453740 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider2024-12-04T15:02:28.453740 - 
2024-12-04T15:02:28.480310 - ------------------------------------------2024-12-04T15:02:28.480310 - 
2024-12-04T15:02:28.480310 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2024-12-04T15:02:28.480810 - ------------------------------------------2024-12-04T15:02:28.480810 - 
2024-12-04T15:02:28.480810 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2024-12-04T15:02:28.480810 - 
2024-12-04T15:02:28.480810 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2024-12-04T15:02:28.480810 - 
2024-12-04T15:02:28.480810 - ------------------------------------------2024-12-04T15:02:28.480810 - 
2024-12-04T15:02:28.486823 - FizzleDorf Custom Nodes: Loaded
2024-12-04T15:02:29.109333 - [tinyterraNodes] Loaded
2024-12-04T15:02:29.419914 - All packages from requirements.txt are installed and up to date.2024-12-04T15:02:29.419914 - 
2024-12-04T15:02:29.421414 - llama-cpp installed2024-12-04T15:02:29.421414 - 
2024-12-04T15:02:29.421914 - All packages from requirements.txt are installed and up to date.2024-12-04T15:02:29.421914 - 
2024-12-04T15:02:29.536297 - 
2024-12-04T15:02:29.536297 - [rgthree-comfy] Loaded 42 fantastic nodes. 🎉
2024-12-04T15:02:29.536297 - 
2024-12-04T15:02:29.909866 - Searge-SDXL v4.3.1 in D:\AI\Art\ComfyUI\custom_nodes\SeargeSDXL2024-12-04T15:02:29.910366 - 
2024-12-04T15:02:29.929445 - 
Import times for custom nodes:
2024-12-04T15:02:29.929445 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-04T15:02:29.929445 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Styles_CSV_Loader
2024-12-04T15:02:29.929445 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-CSV-Loader
2024-12-04T15:02:29.929445 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger
2024-12-04T15:02:29.929445 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Universal-Styler
2024-12-04T15:02:29.929445 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Image-Saver
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI_FizzNodes
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\mikey_nodes
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\python-interpreter-node
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-GGUF
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-ImageMetadataExtension
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-SaveImageWithMetaData
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\rgthree-comfy
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
2024-12-04T15:02:29.929945 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-12-04T15:02:29.930444 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-12-04T15:02:29.930444 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2024-12-04T15:02:29.930444 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\comfyui-dream-project
2024-12-04T15:02:29.930444 -    0.0 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI_Searge_LLM
2024-12-04T15:02:29.930444 -    0.1 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer
2024-12-04T15:02:29.930444 -    0.1 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
2024-12-04T15:02:29.930444 -    0.2 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Florence2
2024-12-04T15:02:29.930444 -    0.2 seconds: D:\AI\Art\ComfyUI\custom_nodes\comfyui-tensorops
2024-12-04T15:02:29.930945 -    0.3 seconds: D:\AI\Art\ComfyUI\custom_nodes\comfyui-propost
2024-12-04T15:02:29.930945 -    0.3 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-12-04T15:02:29.930945 -    0.3 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-04T15:02:29.930945 -    0.3 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI_VLM_nodes
2024-12-04T15:02:29.931446 -    0.4 seconds: D:\AI\Art\ComfyUI\custom_nodes\SeargeSDXL
2024-12-04T15:02:29.931446 -    0.6 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI_LayerStyle
2024-12-04T15:02:29.931446 -    1.1 seconds: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2024-12-04T15:02:29.931446 - 
2024-12-04T15:02:29.940081 - Starting server

2024-12-04T15:02:29.940081 - To see the GUI go to: http://127.0.0.1:8188
2024-12-04T15:02:55.119960 - FETCH DATA from: D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json2024-12-04T15:02:55.120459 - 2024-12-04T15:02:55.129478 -  [DONE]2024-12-04T15:02:55.129478 - 
2024-12-04T15:02:56.163787 - [Inspire Pack] IPAdapterPlus is not installed.2024-12-04T15:02:56.163787 - 
2024-12-04T15:02:56.173806 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: D:\AI\Art\ComfyUI
            2024-12-04T15:02:56.173806 - 
2024-12-04T15:03:11.443209 - got prompt
2024-12-04T15:03:11.461238 - queue_counter: 02024-12-04T15:03:11.461739 - 
2024-12-04T15:03:11.900551 - {'steps': 20, 'width': 1024, 'height': 1024, 'guidance_scale': 3.5, 'seed': 133742069, 'shift': True, 'base_shift': 0.5, 'max_shift': 1.15}2024-12-04T15:03:11.900551 - 
2024-12-04T15:03:11.920110 - additional_args: num_train_epochs=20


2024-12-04T15:03:11.920609 - 
2024-12-04T15:03:11.942150 - highvram is enabled / highvramが有効です2024-12-04T15:03:11.942150 - 
2024-12-04T15:03:11.985808 - 2024-12-04 15:03:11 INFO     Checking the state dict: Diffusers or BFL, dev or schnell                  flux_utils.py:61
2024-12-04T15:03:11.987819 -                     INFO     t5xxl_max_token_length: 512                                 flux_train_network_comfy.py:149
2024-12-04T15:03:12.435753 - 2024-12-04 15:03:12 INFO     Loading dataset config from [[datasets]]                               train_network.py:288
                             resolution = [ 1024, 1024,]                                                                
                             batch_size = 1                                                                             
                             enable_bucket = true                                                                       
                             bucket_no_upscale = false                                                                  
                             min_bucket_reso = 512                                                                      
                             max_bucket_reso = 1024                                                                     
                             [[datasets.subsets]]                                                                       
                             image_dir = "Z:\\LoRA-Training\\LoRA-Train"                                                
                             class_tokens = "M4G3N"                                                                     
                             num_repeats = 1                                                                            
                                                                                                                        
                                                                                                                        
                             [general]                                                                                  
                             shuffle_caption = false                                                                    
                             caption_extension = ".txt"                                                                 
                             keep_tokens_separator = "|||"                                                              
                             caption_dropout_rate = 0.0                                                                 
                             color_aug = false                                                                          
                             flip_aug = false                                                                           
                                                                                                                        
2024-12-04T15:03:12.438759 -                     INFO     prepare images.                                                          train_util.py:1836
2024-12-04T15:03:12.443766 -                     INFO     get image size from name of cache files                                  train_util.py:1768
2024-12-04T15:03:12.445774 - 
  0%|                                                                                           | 0/46 [00:00<?, ?it/s]2024-12-04T15:03:12.459297 - 
100%|████████████████████████████████████████████████████████████████████████████████| 46/46 [00:00<00:00, 3401.59it/s]2024-12-04T15:03:12.459797 - 
2024-12-04T15:03:12.460677 -                     INFO     set image size from cache files: 46/46                                   train_util.py:1775
2024-12-04T15:03:12.460677 -                     INFO     found directory Z:\LoRA-Training\LoRA-Train contains 46 image files      train_util.py:1777
2024-12-04T15:03:12.466175 -                     INFO     Found captions for 46 images.                                            train_util.py:1807
2024-12-04T15:03:12.466859 -                     INFO     46 train images with repeating.                                          train_util.py:1877
2024-12-04T15:03:12.467864 -                     INFO     0 reg images.                                                            train_util.py:1880
2024-12-04T15:03:12.468370 -                     WARNING  no regularization images / 正則化画像が見つかりませんでした              train_util.py:1885
2024-12-04T15:03:12.470871 -                     INFO     [Dataset 0]                                                              config_util.py:570
                               batch_size: 1                                                                            
                               resolution: (1024, 1024)                                                                 
                               enable_bucket: True                                                                      
                               network_multiplier: 1.0                                                                  
                               min_bucket_reso: 512                                                                     
                               max_bucket_reso: 1024                                                                    
                               bucket_reso_steps: 64                                                                    
                               bucket_no_upscale: False                                                                 
                                                                                                                        
                               [Subset 0 of Dataset 0]                                                                  
                                 image_dir: "Z:\LoRA-Training\LoRA-Train"                                               
                                 image_count: 46                                                                        
                                 num_repeats: 1                                                                         
                                 shuffle_caption: False                                                                 
                                 keep_tokens: 0                                                                         
                                 keep_tokens_separator: |||                                                             
                                 caption_separator: ,                                                                   
                                 secondary_separator: None                                                              
                                 enable_wildcard: False                                                                 
                                 caption_dropout_rate: 0.0                                                              
                                 caption_dropout_every_n_epoches: 0                                                     
                                 caption_tag_dropout_rate: 0.0                                                          
                                 caption_prefix: None                                                                   
                                 caption_suffix: None                                                                   
                                 color_aug: False                                                                       
                                 flip_aug: False                                                                        
                                 face_crop_aug_range: None                                                              
                                 random_crop: False                                                                     
                                 token_warmup_min: 1,                                                                   
                                 token_warmup_step: 0.0,                                                                
                                 alpha_mask: False,                                                                     
                                 is_reg: False                                                                          
                                 class_tokens: M4G3N                                                                    
                                 caption_extension: .txt                                                                
                                                                                                                        
                                                                                                                        
2024-12-04T15:03:12.472373 -                     INFO     [Dataset 0]                                                              config_util.py:576
2024-12-04T15:03:12.473378 -                     INFO     loading image sizes.                                                      train_util.py:881
2024-12-04T15:03:12.473378 - 
  0%|                                                                                           | 0/46 [00:00<?, ?it/s]2024-12-04T15:03:12.473378 - 
100%|██████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:00<?, ?it/s]2024-12-04T15:03:12.473378 - 
2024-12-04T15:03:12.474364 -                     INFO     make buckets                                                              train_util.py:887
2024-12-04T15:03:12.475862 -                     INFO     number of images (including repeats) /                                    train_util.py:933
                             各bucketの画像枚数(繰り返し回数を含む)                                                   
2024-12-04T15:03:12.476367 -                     INFO     bucket 0: resolution (1024, 1024), count: 46                              train_util.py:938
2024-12-04T15:03:12.476867 -                     INFO     mean ar error (without repeats): 0.0                                      train_util.py:943
2024-12-04T15:03:12.480697 -                     INFO     preparing accelerator                                                  train_network.py:353
2024-12-04T15:03:12.518618 - accelerator device:2024-12-04T15:03:12.519119 -  2024-12-04T15:03:12.519119 - cuda2024-12-04T15:03:12.519119 - 
2024-12-04T15:03:12.519619 -                     INFO     Checking the state dict: Diffusers or BFL, dev or schnell                  flux_utils.py:61
2024-12-04T15:03:12.523585 -                     INFO     Building Flux model dev from BFL checkpoint                               flux_utils.py:118
2024-12-04T15:03:12.618918 -                     INFO     Loading state dict from D:\AI\Art\ComfyUI\models\unet\Flux.1              flux_utils.py:135
                             D\flux1-dev-FP32.safetensors                                                               
2024-12-04T15:03:12.674059 -                     INFO     Loaded Flux: <All keys matched successfully>                              flux_utils.py:153
2024-12-04T15:03:12.676067 -                     INFO     prepare split model                                          flux_train_network_comfy.py:98
2024-12-04T15:03:12.761794 -                     INFO     load state dict for lower                                   flux_train_network_comfy.py:105
2024-12-04T15:03:12.781962 -                     INFO     load state dict for upper                                   flux_train_network_comfy.py:110
2024-12-04T15:03:12.806519 -                     INFO     prepare upper model                                         flux_train_network_comfy.py:113
2024-12-04T15:03:20.479284 - 2024-12-04 15:03:20 INFO     split model prepared                                        flux_train_network_comfy.py:134
2024-12-04T15:03:20.481269 -                     INFO     Building CLIP-L                                                           flux_utils.py:179
2024-12-04T15:03:20.509417 -                     INFO     Loading state dict from                                                   flux_utils.py:275
                             D:\AI\Art\ComfyUI\models\clip\FLUX\clip_l.safetensors                                      
2024-12-04T15:03:20.515965 -                     INFO     Loaded CLIP-L: <All keys matched successfully>                            flux_utils.py:278
2024-12-04T15:03:20.573923 -                     INFO     Loading state dict from                                                   flux_utils.py:330
                             D:\AI\Art\ComfyUI\models\clip\t5\t5xxl_fp16.safetensors                                    
2024-12-04T15:03:20.583535 -                     INFO     Loaded T5xxl: <All keys matched successfully>                             flux_utils.py:333
2024-12-04T15:03:20.584798 -                     INFO     Building AutoEncoder                                                      flux_utils.py:160
2024-12-04T15:03:20.604915 -                     INFO     Loading state dict from                                                   flux_utils.py:165
                             D:\AI\Art\ComfyUI\models\vae\FLUX1\FLUX_ae.safetensors                                     
2024-12-04T15:03:20.613456 -                     INFO     Loaded AE: <All keys matched successfully>                                flux_utils.py:168
2024-12-04T15:03:20.613962 - import network module:2024-12-04T15:03:20.613962 -  2024-12-04T15:03:20.613962 - .networks.lora_flux2024-12-04T15:03:20.613962 - 
2024-12-04T15:03:20.753386 -                     INFO     [Dataset 0]                                                              train_util.py:2357
2024-12-04T15:03:20.753889 -                     INFO     caching latents with caching strategy.                                    train_util.py:989
2024-12-04T15:03:20.754903 -                     INFO     checking cache validity...                                                train_util.py:999
2024-12-04T15:03:20.754903 - 
  0%|                                                                                           | 0/46 [00:00<?, ?it/s]2024-12-04T15:03:20.758935 - 
100%|███████████████████████████████████████████████████████████████████████████████| 46/46 [00:00<00:00, 13024.03it/s]2024-12-04T15:03:20.758935 - 
2024-12-04T15:03:20.759435 -                     INFO     no latents to cache                                                      train_util.py:1039
2024-12-04T15:03:21.042026 - 2024-12-04 15:03:21 INFO     move vae and unet to cpu to save memory                     flux_train_network_comfy.py:202
2024-12-04T15:03:21.217774 -                     INFO     move text encoders to gpu                                   flux_train_network_comfy.py:210
2024-12-04T15:03:25.552194 - 2024-12-04 15:03:25 INFO     [Dataset 0]                                                              train_util.py:2378
2024-12-04T15:03:25.552702 -                     INFO     caching Text Encoder outputs with caching strategy.                      train_util.py:1132
2024-12-04T15:03:25.553785 -                     INFO     checking cache validity...                                               train_util.py:1138
2024-12-04T15:03:25.553785 - 
  0%|                                                                                           | 0/46 [00:00<?, ?it/s]2024-12-04T15:03:25.559836 - 
100%|████████████████████████████████████████████████████████████████████████████████| 46/46 [00:00<00:00, 7601.97it/s]2024-12-04T15:03:25.559836 - 
2024-12-04T15:03:25.560837 -                     INFO     no Text Encoder outputs to cache                                         train_util.py:1160
2024-12-04T15:03:25.562838 -                     INFO     cache Text Encoder outputs for sample prompt: ["An old      flux_train_network_comfy.py:226
                             style 1800's library with old furniture, a large amount of                                 
                             books on shelves with an assortment of artifacts on                                        
                             shelves, some paintings on the wall, and a lit fireplace.                                  
                             There is a large wall sized window looking out into a                                      
                             beautiful garden with a mountain background. In the middle                                 
                             of the room, a single red haired, green eyed woman in a                                    
                             white wedding dress is looking toward the viewer and facing                                
                             to the right. Her arms are out crossed. She has a seductive                                
                             look on her face, her cleavage is showing."]                                               
2024-12-04T15:03:25.564885 -                     INFO     cache Text Encoder outputs for prompt: An old style 1800's  flux_train_network_comfy.py:256
                             library with old furniture, a large amount of books on                                     
                             shelves with an assortment of artifacts on shelves, some                                   
                             paintings on the wall, and a lit fireplace. There is a                                     
                             large wall sized window looking out into a beautiful garden                                
                             with a mountain background. In the middle of the room, a                                   
                             single red haired, green eyed woman in a white wedding                                     
                             dress is looking toward the viewer and facing to the right.                                
                             Her arms are out crossed. She has a seductive look on her                                  
                             face, her cleavage is showing.                                                             
2024-12-04T15:03:25.805949 -                     INFO     cache Text Encoder outputs for prompt:                      flux_train_network_comfy.py:256
2024-12-04T15:03:25.919862 -                     INFO     move CLIP-L back to cpu                                     flux_train_network_comfy.py:267
2024-12-04T15:03:26.064677 - 2024-12-04 15:03:26 INFO     move t5XXL back to cpu                                      flux_train_network_comfy.py:269
2024-12-04T15:03:31.896834 - 2024-12-04 15:03:31 INFO     move vae and unet back to original device                   flux_train_network_comfy.py:274
2024-12-04T15:03:31.900839 -                     INFO     create LoRA network. base dim (rank): 16, alpha: 1.0                       lora_flux.py:491
2024-12-04T15:03:31.901847 -                     INFO     neuron dropout: p=None, rank dropout: p=None, module dropout: p=None       lora_flux.py:492
2024-12-04T15:03:31.901847 -                     INFO     train single blocks only                                                   lora_flux.py:502
2024-12-04T15:03:31.903860 -                     INFO     create LoRA for Text Encoder 1:                                            lora_flux.py:589
2024-12-04T15:03:31.935020 -                     INFO     create LoRA for Text Encoder 1: 72 modules.                                lora_flux.py:592
2024-12-04T15:03:32.282931 - 2024-12-04 15:03:32 INFO     create LoRA for FLUX single blocks: 114 modules.                           lora_flux.py:606
2024-12-04T15:03:32.284934 -                     INFO     enable LoRA for U-Net: 114 modules                                         lora_flux.py:755
2024-12-04T15:03:32.286441 - FLUX: Gradient checkpointing enabled.2024-12-04T15:03:32.286944 - 
2024-12-04T15:03:32.286944 - prepare optimizer, data loader etc.2024-12-04T15:03:32.286944 - 
2024-12-04T15:03:32.289954 -                     WARNING  learning rate is too low. If using D-Adaptation or Prodigy, set learning train_util.py:4527
                             rate around 1.0 /                                                                          
                             学習率が低すぎるようです。D-AdaptationまたはProdigyの使用時は1.0前後の値                   
                             を指定してください: lr=9.999999999999999e-05                                               
2024-12-04T15:03:32.291454 -                     WARNING  recommend option: lr=1.0 / 推奨は1.0です                                 train_util.py:4530
2024-12-04T15:03:32.293464 -                     INFO     use Prodigy optimizer | {}                                               train_util.py:4579
2024-12-04T15:03:37.791792 - running training2024-12-04T15:03:37.791792 - 
2024-12-04T15:03:37.792297 -   num train images * repeats: 462024-12-04T15:03:37.792297 - 
2024-12-04T15:03:37.792804 -   num reg images: 02024-12-04T15:03:37.792804 - 
2024-12-04T15:03:37.792804 -   num batches per epoch: 462024-12-04T15:03:37.792804 - 
2024-12-04T15:03:37.792804 -   num epochs: 552024-12-04T15:03:37.792804 - 
2024-12-04T15:03:37.792804 -   batch size per device: 12024-12-04T15:03:37.793750 - 
2024-12-04T15:03:37.793750 -   gradient accumulation steps: 12024-12-04T15:03:37.793750 - 
2024-12-04T15:03:37.793750 -   total optimization steps: 25002024-12-04T15:03:37.793750 - 
2024-12-04T15:03:56.284884 - 2024-12-04 15:03:56 INFO     text_encoder is not needed for training. deleting to save memory.     train_network.py:1044
2024-12-04T15:03:56.302694 -                     INFO     unet dtype: torch.bfloat16, device: cuda:0                            train_network.py:1061
2024-12-04T15:04:13.629335 - 
Epoch 1/55 - steps:   0%|                                          | 1/2500 [00:17<11:53:50, 17.14s/it, avr_loss=0.341]2024-12-04T15:04:13.922547 - 2024-12-04 15:04:13 INFO                                                                          flux_train_utils.py:44
2024-12-04T15:04:13.923047 -                     INFO     generating sample images at step: 1                                  flux_train_utils.py:45
2024-12-04T15:04:13.926051 -                     INFO     prompt: An old style 1800's library with old furniture, a large     flux_train_utils.py:169
                             amount of books on shelves with an assortment of artifacts on                              
                             shelves, some paintings on the wall, and a lit fireplace. There is                         
                             a large wall sized window looking out into a beautiful garden with                         
                             a mountain background. In the middle of the room, a single red                             
                             haired, green eyed woman in a white wedding dress is looking toward                        
                             the viewer and facing to the right. Her arms are out crossed. She                          
                             has a seductive look on her face, her cleavage is showing.                                 
2024-12-04T15:04:13.926051 -                     INFO     height: 1024                                                        flux_train_utils.py:171
2024-12-04T15:04:13.927556 -                     INFO     width: 1024                                                         flux_train_utils.py:172
2024-12-04T15:04:13.928558 -                     INFO     sample_steps: 20                                                    flux_train_utils.py:173
2024-12-04T15:04:13.930057 -                     INFO     scale: 3.5                                                          flux_train_utils.py:174
2024-12-04T15:04:13.931057 -                     INFO     seed: 133742069                                                     flux_train_utils.py:177
2024-12-04T15:04:13.931057 - Using cached text encoder outputs for prompt: An old style 1800's library with old furniture, a large amount of books on shelves with an assortment of artifacts on shelves, some paintings on the wall, and a lit fireplace. There is a large wall sized window looking out into a beautiful garden with a mountain background. In the middle of the room, a single red haired, green eyed woman in a white wedding dress is looking toward the viewer and facing to the right. Her arms are out crossed. She has a seductive look on her face, her cleavage is showing.2024-12-04T15:04:13.931057 - 
2024-12-04T15:04:13.932058 - 
2024-12-04T15:04:13.932058 - 
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]2024-12-04T15:04:13.932557 - 2024-12-04T15:04:13.932557 - 
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]2024-12-04T15:04:13.932557 - 
2024-12-04T15:04:13.933057 -                     ERROR    !!! Exception during processing !!! 'FluxUpperLowerWrapper' object has no  execution.py:392
                             attribute 'prepare_block_swap_before_forward'                                              
2024-12-04T15:04:13.937565 -                     ERROR    Traceback (most recent call last):                                         execution.py:393
                               File "D:\AI\Art\ComfyUI\execution.py", line 323, in execute                              
                                 output_data, output_ui, has_subgraph = get_output_data(obj,                            
                             input_data_all, execution_block_cb=execution_block_cb,                                     
                             pre_execute_cb=pre_execute_cb)                                                             
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                 
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                 
                             ^                                                                                          
                               File "D:\AI\Art\ComfyUI\execution.py", line 198, in get_output_data                      
                                 return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION,                 
                             allow_interrupt=True, execution_block_cb=execution_block_cb,                               
                             pre_execute_cb=pre_execute_cb)                                                             
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                 
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                 
                             ^^^^^^^^^^^^^^^^^^                                                                         
                               File "D:\AI\Art\ComfyUI\execution.py", line 169, in _map_node_over_list                  
                                 process_inputs(input_dict, i)                                                          
                               File "D:\AI\Art\ComfyUI\execution.py", line 158, in process_inputs                       
                                 results.append(getattr(obj, func)(**inputs))                                           
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                            
                               File "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\nodes.py", line                 
                             1200, in validate                                                                          
                                 image_tensors = network_trainer.sample_images(*params)                                 
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                 
                               File                                                                                     
                             "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\flux_train_network_com                 
                             fy.py", line 310, in sample_images                                                         
                                 image_tensors = flux_train_utils.sample_images(                                        
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                        
                               File                                                                                     
                             "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\library\flux_train_uti                 
                             ls.py", line 89, in sample_images                                                          
                                 image_tensor = sample_image_inference(                                                 
                                                ^^^^^^^^^^^^^^^^^^^^^^^                                                 
                               File                                                                                     
                             "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\library\flux_train_uti                 
                             ls.py", line 223, in sample_image_inference                                                
                                 x = denoise(flux, noise, img_ids, t5_out, txt_ids, l_pooled,                           
                             timesteps=timesteps, guidance=scale, t5_attn_mask=t5_attn_mask)                            
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                 
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                     
                               File                                                                                     
                             "D:\AI\Art\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\library\flux_train_uti                 
                             ls.py", line 310, in denoise                                                               
                                 model.prepare_block_swap_before_forward()                                              
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                
                               File                                                                                     
                             "D:\AI\Art\ComfyUI\penv\Lib\site-packages\torch\nn\modules\module.py",                     
                             line 1931, in __getattr__                                                                  
                                 raise AttributeError(                                                                  
                             AttributeError: 'FluxUpperLowerWrapper' object has no attribute                            
                             'prepare_block_swap_before_forward'                                                        
                                                                                                                        
2024-12-04T15:04:13.939068 -                     INFO     Prompt executed in 62.48 seconds                                                main.py:139

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

Workflow too large. Please manually upload the workflow from local file system.

Additional Context

(Please add any additional context or steps to reproduce the error here)
`

@picobyte

picobyte commented Dec 5, 2024

I seem to get a similar error:

enable fp8 training for U-Net.                                                                                                                                                                                                  20:10:49 [0/1852]
unet_weight_dtype: torch.float8_e4m3fn                                                                                                                                                                                                           
running training                                                                                                                                                                                                                                 
  num train images * repeats: 23370                                                                                                                                                                                                              
  num reg images: 0                                                                                                                                                                                                                              
  num batches per epoch: 11685                              
  num epochs: 1                                                                                                                                                                                                                                  
  batch size per device: 2                                                                                                                                                                                                                       
  gradient accumulation steps: 1                                                                                                                                                                                                                 
  total optimization steps: 1000                                                                                                                                                                                                                 
2024-12-05 19:13:41 INFO     text_encoder is not needed for training. deleting to save memory.                                                   train_network.py:1044                                                                           
                    INFO     unet dtype: torch.float8_e4m3fn, device: cuda:0                                                                     train_network.py:1061                                                                           
Epoch 1/1 - steps:  25%|██████████████████████▎                                                                  | 250/1000 [57:06<2:51:19, 13.71s/it, avr_loss=0.355]GPU Temperature: 38                                                        
                                                                                                                                                                                                                                                 
saving checkpoint: ../training/output/flux_lora_file_name_rank16_bf16-step00250.safetensors                                                                                                                                                      
2024-12-05 20:10:49 INFO                                                                                                                        flux_train_utils.py:44                                                                           
                    INFO     generating sample images at step: 250                                                                              flux_train_utils.py:45                                                                           
                    INFO     prompt: portrait of a woman                                                                                       flux_train_utils.py:169                                                                           
                    INFO     height: 1024                                                                                                      flux_train_utils.py:171                                                                           
                    INFO     width: 1024                                                                                                       flux_train_utils.py:172                                                                           
                    INFO     sample_steps: 20                                                                                                  flux_train_utils.py:173                                                                           
                    INFO     scale: 3.0                                                                                                        flux_train_utils.py:174                                                                           
                    INFO     seed: 42                                                                                                          flux_train_utils.py:177                                                                           
Using cached text encoder outputs for prompt: portrait of a woman                                                                                                                                                                                
  0%|                                                                                                                                          | 0/20 [00:00<?, ?it/s]                                                                           
                    ERROR    !!! Exception during processing !!! 'FluxUpperLowerWrapper' object has no attribute 'prepare_block_swap_before_forward'  execution.py:393                                                                           
                    ERROR    Traceback (most recent call last):                                                                                       execution.py:394                                                                           
                               File "~/git/ComfyUI/execution.py", line 324, in execute                                                                                                                                    
                                 output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb,                                                                                              
                             pre_execute_cb=pre_execute_cb)                                                                                                                                                                                      
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                            
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                                       
                               File "~/git/ComfyUI/execution.py", line 199, in get_output_data                                                                                                                            
                                 return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True,                                                                                                                    
                             execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)                                                                                                                                               
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                            
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                      
                               File "~/git/ComfyUI/execution.py", line 170, in _map_node_over_list                                                                                                                        
                                 process_inputs(input_dict, i)                                                                                                                                                                                   
                               File "~/git/ComfyUI/execution.py", line 159, in process_inputs                                                                                                                             
                                 results.append(getattr(obj, func)(**inputs))                                                                                                                                                                    
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                            
                               File "~/git/ComfyUI/custom_nodes/ComfyUI-FluxTrainer/nodes.py", line 1200, in validate                                                                                                     
                                 image_tensors = network_trainer.sample_images(*params)                                                                                                                                                          
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                          
                               File "~/git/ComfyUI/custom_nodes/ComfyUI-FluxTrainer/flux_train_network_comfy.py", line 310, in                                                                                            
                             sample_images                  
                                 image_tensors = flux_train_utils.sample_images(                                                                                                                                                                 
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                 
                               File "~/git/ComfyUI/custom_nodes/ComfyUI-FluxTrainer/library/flux_train_utils.py", line 89, in                                                                                             
                             sample_images                                                                                                                                                                                                       
                                 image_tensor = sample_image_inference(                                                 
                                                ^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                          
                               File "~/git/ComfyUI/custom_nodes/ComfyUI-FluxTrainer/library/flux_train_utils.py", line 223, in                                                                                            
                             sample_image_inference                                                                                                                                                                                              
                                 x = denoise(flux, noise, img_ids, t5_out, txt_ids, l_pooled, timesteps=timesteps, guidance=scale,                                                                                                               
                             t5_attn_mask=t5_attn_mask)                                                                                                                                                                                          
  File "~/git/ComfyUI/custom_nodes/ComfyUI-FluxTrainer/library/flux_train_utils.py", line 310, in denoise
    model.prepare_block_swap_before_forward()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/git/ComfyUI/venv3124cu124/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1729, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'FluxUpperLowerWrapper' object has no attribute 'prepare_block_swap_before_forward'

INFO     Prompt executed in 5226.57 seconds                                                                                            main.py:139

Relevant code, searching for the culprit: the class `Flux(nn.Module)` has the `prepare_block_swap_before_forward` function, but it looks like a different model type is given as `wrapper` to `flux_train_utils.sample_images()`. It could be that we load the wrong model type and pass it to the trainer. Then either an early check is required, or the model type we actually hold in `wrapper` should gain this function too. Not sure yet; maybe first try to error out early, printing what we are holding.
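A minimal sketch of the "error out early" idea: before entering the sampling loop, verify that the object passed as `model` actually exposes the method `denoise()` will call. The helper name `assert_supports_block_swap` is hypothetical; only `prepare_block_swap_before_forward` comes from the traceback.

```python
def assert_supports_block_swap(model):
    # Fail fast with the offending type name, instead of crashing
    # mid-sampling inside torch.nn.Module.__getattr__.
    if not hasattr(model, "prepare_block_swap_before_forward"):
        raise TypeError(
            f"{type(model).__name__} has no 'prepare_block_swap_before_forward'; "
            "expected a Flux model or a wrapper exposing it"
        )
```

Called at the top of the sampling path, this would turn the cryptic `AttributeError` after a long run into an immediate, descriptive failure.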

@SencneS
Copy link
Author

SencneS commented Dec 5, 2024

It is important to note that this happens on the "Flux Train Validate" node. You can still successfully train a LoRA by over-simplifying the workflow and removing the validation nodes. Something like this (5 epochs), just as an example.

image

@picobyte
Copy link

picobyte commented Dec 5, 2024

It seems that if you set split_mode to false you do not reach this code path. split_mode is on by default, but if you hover over the option it is marked as EXPERIMENTAL. This resolves the issue for me.

The hover text also describes a required network arg, 'train_blocks=single'. But the config 'sanity check' indicates this network arg is assigned automatically when split_mode=true, so it seems unrelated to the error. (Setting it in the extra_optimizer_args or additional_args fields doesn't resolve the issue either; only disabling split_mode does.)

from the guide:
In the Training settings it is important to turn on "split_mode" if you have less than 16GB Vram. Remember to set your LoRA's name (Output name) and the output directory where the LoRA's will be saved.

@picobyte
Copy link

picobyte commented Dec 5, 2024

Not sure if this is less memory efficient, but to trigger the error much earlier you can apply this patch:

diff --git a/flux_train_network_comfy.py b/flux_train_network_comfy.py
index 87638d1..ad4863e 100644
--- a/flux_train_network_comfy.py
+++ b/flux_train_network_comfy.py
@@ -68,6 +68,7 @@ class FluxNetworkTrainer(NetworkTrainer):
 
         if args.split_mode:
             model = self.prepare_split_model(model, args, weight_dtype, accelerator)
+            self.wrapper = self.prepare_wrapper(accelerator, model)
 
         clip_l = flux_utils.load_clip_l(args.clip_l, weight_dtype, "cpu", disable_mmap=args.disable_mmap_load_safetensors)
         clip_l.eval()
@@ -135,6 +136,34 @@ class FluxNetworkTrainer(NetworkTrainer):
 
         return flux_lower
 
+    def prepare_wrapper(self, accelerator, flux_lower):
+        class FluxUpperLowerWrapper(torch.nn.Module):
+            def __init__(self, flux_upper: flux_models.FluxUpper, flux_lower: flux_models.FluxLower, device: torch.device):
+                super().__init__()
+                self.flux_upper = flux_upper
+                self.flux_lower = flux_lower
+                self.target_device = device
+
+            def forward(self, img, img_ids, txt, txt_ids, timesteps, y, guidance=None, txt_attention_mask=None):
+                self.flux_lower.to("cpu")
+                clean_memory_on_device(self.target_device)
+                self.flux_upper.to(self.target_device)
+                img, txt, vec, pe = self.flux_upper(img, img_ids, txt, txt_ids, timesteps, y, guidance, txt_attention_mask)
+                self.flux_upper.to("cpu")
+                clean_memory_on_device(self.target_device)
+                self.flux_lower.to(self.target_device)
+                return self.flux_lower(img, txt, vec, pe, txt_attention_mask)
+
+        wrapper = FluxUpperLowerWrapper(self.flux_upper, flux_lower, accelerator.device)
+        if not getattr(wrapper, "prepare_block_swap_before_forward", None):
+            logger.warn("wrapper is not a Flux(nn.Module)-like class?")
+            logger.warn(repr(wrapper))
+            logger.warn(wrapper.__class__)
+            raise ValueError("wrapper has no attribute prepare_block_swap_before_forward")
+
+        logger.info("wrapper prepared")
+        return wrapper
+
     def get_tokenize_strategy(self, args):
         _, is_schnell, _, _ = flux_utils.analyze_checkpoint_state(args.pretrained_model_name_or_path)
 
@@ -287,25 +316,9 @@ class FluxNetworkTrainer(NetworkTrainer):
             accelerator, args, epoch, global_step, flux, ae, text_encoders, sample_prompts_te_outputs, validation_settings)
             clean_memory_on_device(accelerator.device)
             return image_tensors
-        
-        class FluxUpperLowerWrapper(torch.nn.Module):
-            def __init__(self, flux_upper: flux_models.FluxUpper, flux_lower: flux_models.FluxLower, device: torch.device):
-                super().__init__()
-                self.flux_upper = flux_upper
-                self.flux_lower = flux_lower
-                self.target_device = device
-
-            def forward(self, img, img_ids, txt, txt_ids, timesteps, y, guidance=None, txt_attention_mask=None):
-                self.flux_lower.to("cpu")
-                clean_memory_on_device(self.target_device)
-                self.flux_upper.to(self.target_device)
-                img, txt, vec, pe = self.flux_upper(img, img_ids, txt, txt_ids, timesteps, y, guidance, txt_attention_mask)
-                self.flux_upper.to("cpu")
-                clean_memory_on_device(self.target_device)
-                self.flux_lower.to(self.target_device)
-                return self.flux_lower(img, txt, vec, pe, txt_attention_mask)
 
-        wrapper = FluxUpperLowerWrapper(self.flux_upper, flux, accelerator.device)
+        wrapper = self.wrapper
+        wrapper.flux_upper.training
         clean_memory_on_device(accelerator.device)
         image_tensors = flux_train_utils.sample_images(
             accelerator, args, epoch, global_step, wrapper, ae, text_encoders, sample_prompts_te_outputs, validation_settings
@@ -511,4 +524,4 @@ if __name__ == "__main__":
     args = train_util.read_config_from_file(args, parser)
 
     trainer = FluxNetworkTrainer()
-    trainer.train(args)
\ No newline at end of file
+    trainer.train(args)

@picobyte
Copy link

picobyte commented Dec 5, 2024

Possibly the fix (instead of the other patch, which is for debugging) is just:

diff --git a/library/flux_train_utils.py b/library/flux_train_utils.py
index 6f6c1f2..8a4b461 100644
--- a/library/flux_train_utils.py
+++ b/library/flux_train_utils.py
@@ -307,7 +307,8 @@ def denoise(
     comfy_pbar = ProgressBar(total=len(timesteps))
     for t_curr, t_prev in zip(tqdm(timesteps[:-1]), timesteps[1:]):
         t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device)
-        model.prepare_block_swap_before_forward()
+        if hasattr(model, "prepare_block_swap_before_forward"):
+            model.prepare_block_swap_before_forward()
         pred = model(
             img=img,
             img_ids=img_ids,
@@ -321,7 +322,8 @@ def denoise(
 
         img = img + (t_prev - t_curr) * pred
         comfy_pbar.update(1)
-    model.prepare_block_swap_before_forward()
+    if hasattr(model, "prepare_block_swap_before_forward"):
+        model.prepare_block_swap_before_forward()
     return img
 
 # endregion
@@ -611,4 +613,4 @@ def add_flux_train_arguments(parser: argparse.ArgumentParser):
         type=float,
         default=3.0,
         help="Discrete flow shift for the Euler Discrete Scheduler, default is 3.0. / Euler Discrete Schedulerの離散フローシフト、デフォルトは3.0。",
-    )
\ No newline at end of file
+    )

The comment here seems to suggest prepare_block_swap_before_forward is just an optimization, and is therefore likely only implemented in class Flux(nn.Module), where it is really required. If we have a different class, hopefully we can just skip the call (as yet untested; this time I have only set split_mode to false).
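An alternative to guarding every call site with `hasattr` would be to give the wrapper a no-op method matching Flux's interface, so `denoise()` can call it unconditionally via duck typing. A sketch (the real wrapper subclasses `torch.nn.Module`, omitted here for brevity; the no-op body is an assumption, not the repo's code):

```python
class FluxUpperLowerWrapperSketch:
    def prepare_block_swap_before_forward(self):
        # In split mode the upper/lower halves are moved between devices
        # explicitly inside forward(), so there is nothing to swap here.
        pass

# denoise() could then call the method on either model type without a guard.
FluxUpperLowerWrapperSketch().prepare_block_swap_before_forward()
```

Either approach removes the AttributeError; the duck-typing version keeps `flux_train_utils.py` untouched.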
