
Where to put the Gemma loader's model? Neither the Hugging Face cache nor the models folder works #104

Open
xueqing0622 opened this issue Dec 17, 2024 · 0 comments

Comments

@xueqing0622

I'm using this branch: https://github.com/Efficient-Large-Model/ComfyUI_ExtraModels/
Where should the Gemma loader's model be placed? I tried both the Hugging Face cache and the ComfyUI models folder, and neither works:
I:\cache\huggingface\hub\models--unsloth--gemma-2-2b-it-bnb-4bit
F:\ComfyUI\ComfyUI\models\text_encoders\unsloth\gemma-2-2b-it-bnb-4bit
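For reference, a minimal diagnostic I'd run to see what transformers actually finds in each location (the two paths are the ones above; note the hub cache keeps the real files in a snapshots\<commit-hash> subfolder, not at the top level):

```python
import os

# from_pretrained() needs config.json plus a weight file (e.g.
# model.safetensors) in the resolved folder. List what's really there.
candidates = [
    r"I:\cache\huggingface\hub\models--unsloth--gemma-2-2b-it-bnb-4bit",
    r"F:\ComfyUI\ComfyUI\models\text_encoders\unsloth\gemma-2-2b-it-bnb-4bit",
]
for folder in candidates:
    if not os.path.isdir(folder):
        print("missing:", folder)
        continue
    for root, _dirs, files in os.walk(folder):
        print(root, files)
```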

Prompt executed in 80.75 seconds
got prompt
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
low_cpu_mem_usage was None, now set to True since model is quantized.
!!! Exception during processing !!! unsloth/gemma-2-2b-it-bnb-4bit does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.
Traceback (most recent call last):
  File "F:\ComfyUI\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_ExtraModels\Gemma\nodes.py", line 63, in load_model
    text_encoder_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI\python_embeded\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3708, in from_pretrained
    raise EnvironmentError(
OSError: unsloth/gemma-2-2b-it-bnb-4bit does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.

Also, the Gemma loader is very slow; note the line "low_cpu_mem_usage was None, now set to True since model is quantized." in the log above.
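A workaround sketch (untested, just what I would try): fully download the repo with huggingface_hub, then point from_pretrained at the returned local path so no cache/name resolution is involved. bitsandbytes must be installed for this 4-bit checkpoint:

```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM
import torch

# Download the complete repo (config.json + safetensors weights) and get
# the resolved local snapshot path back.
local_dir = snapshot_download("unsloth/gemma-2-2b-it-bnb-4bit")

# Loading from an explicit local path avoids the OSError above; the
# quantization_config stored in config.json is picked up automatically
# (requires bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(local_dir, torch_dtype=torch.float16)
```

Alternatively, setting the HF_HOME environment variable to I:\cache\huggingface before launching ComfyUI should make the hub client use that cache directory (the hub cache then lives at I:\cache\huggingface\hub, matching the path above).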
