
Megrez-3B-Omni model support #12568

Closed
juan-OY opened this issue Dec 18, 2024 · 2 comments
juan-OY commented Dec 18, 2024

Running this model fails with an error:
https://huggingface.co/Infinigence/Megrez-3B-Omni

Transformers 4.45.0, LNL platform:
C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\transformers\models\auto\image_processing_auto.py:517: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use slow_image_processor_class, or fast_image_processor_class instead
warnings.warn(
Traceback (most recent call last):
  File "D:\Mergrez-3B-Omni\generate.py", line 42, in <module>
    response = model.chat(
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\test\.cache\huggingface\modules\transformers_modules\Megrez-3B-Omni\modeling_megrezo.py", line 330, in chat
    output_ids = self.generate(**data, **generation_config)
  File "C:\Users\test\.cache\huggingface\modules\transformers_modules\Megrez-3B-Omni\modeling_megrezo.py", line 288, in generate
    input_ids, input_embeds, position_ids = self.compose_embeddings(data)
  File "C:\Users\test\.cache\huggingface\modules\transformers_modules\Megrez-3B-Omni\modeling_megrezo.py", line 230, in compose_embeddings
    embeddings_image = self.vision(pixel_values, tgt_sizes, patch_attention_mask=patch_attention_mask)
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\test\.cache\huggingface\modules\transformers_modules\Megrez-3B-Omni\modeling_megrezo.py", line 173, in forward
    embedding = self.resampler(embedding, tgt_sizes)
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\test\.cache\huggingface\modules\transformers_modules\Megrez-3B-Omni\resampler.py", line 174, in forward
    out = self.attn(
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\modules\activation.py", line 1266, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "C:\Users\test\miniforge3\envs\ipex-llm\lib\site-packages\torch\nn\functional.py", line 5477, in multi_head_attention_forward
    attn_output = linear(attn_output, out_proj_weight, out_proj_bias)
RuntimeError: self and mat2 must have the same dtype, but got Half and Byte
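The failure is a dtype mismatch inside the resampler's attention output projection: the activations arrive in float16 (Half) while the projection weight is still an 8-bit uint8 (Byte) tensor, so the underlying matmul refuses to run. A minimal sketch that reproduces the same PyTorch error with hypothetical shapes (unrelated to the actual model weights):

```python
import torch
import torch.nn.functional as F

# Half-precision activations, like the float16 hidden states in the traceback.
x = torch.randn(2, 4, dtype=torch.half)

# A weight left in uint8 (Byte), e.g. a quantized tensor that was never
# dequantized before reaching a plain linear/attention layer.
w = torch.randint(0, 255, (3, 4), dtype=torch.uint8)

try:
    F.linear(x, w)  # raises RuntimeError: dtypes of input and weight differ
except RuntimeError as e:
    print(e)

# Casting both operands to a common dtype lets the matmul succeed.
out = F.linear(x.float(), w.float())
print(out.shape)  # torch.Size([2, 3])
```

This is consistent with the resolution below: the fix lands in the library (dequantizing or casting the resampler weights before the attention call) rather than in user code, which is why upgrading ipex-llm resolves it.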

@MeouSker77 MeouSker77 self-assigned this Dec 18, 2024
MeouSker77 (Contributor) replied:

Supported in #12582; try the latest ipex-llm: pip install --pre --upgrade ipex-llm

juan-OY commented Dec 20, 2024

The issue is resolved now, thanks.

@juan-OY juan-OY closed this as completed Dec 20, 2024