
Support for deepseek-vl2 models #94

Open
watzon opened this issue Feb 7, 2025 · 1 comment
Assignees
Labels
to-investigate Needs looking into

Comments

@watzon

watzon commented Feb 7, 2025

For some reason the deepseek-vl2 models by mlx-community do not work correctly. They download and load into memory, but when I try to use one (such as mlx-community/deepseek-vl2-small-4bit) I get the error:

Error in iterating prediction stream: TypeError: object of type 'NoneType' has no len()
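For context, this class of error means some value the loader expected to be a sequence was never populated before `len()` was called on it. The actual variable inside the deepseek-vl2 loading path is unknown; the sketch below only reproduces the exception type, with a hypothetical stand-in name:

```python
# Minimal illustration of the failure class reported above: calling
# len() on a value that was left as None. `image_tokens` is a
# hypothetical stand-in, not the real field in the model loader.
image_tokens = None

try:
    count = len(image_tokens)
except TypeError as exc:
    # CPython reports: object of type 'NoneType' has no len()
    print(exc)
```

This suggests the model's config or processor output is missing a field that other (working) vision models provide, rather than a memory or quantization problem.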
@yagil yagil added the to-investigate Needs looking into label Feb 7, 2025
@Devil-Mix

I'm having issues too with the vision model deepseek-vl2-small-4bit on an M1 Pro chip with 16 GB RAM.

LOG

[2025-02-09 11:13:09.299] [error] [LMSInternal][Client=plugin:builtin:lmstudio/default-generator][Endpoint=predict] Error in channel handler: Error: received prediction-error
at _0x570e88. (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:411:106461)
at _0x307662._0x5a91ce (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:273120)
at _0x307662.emit (node:events:519:28)
at _0x307662.onChildMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:244622)
at _0x307662.onChildMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:290626)
at ForkUtilityProcess. (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:243563)
at ForkUtilityProcess.emit (node:events:519:28)
at ForkUtilityProcess.a.emit (node:electron/js2c/browser_init:2:71823)

  • Caused By: Error: Error in iterating prediction stream: AssertionError:
    at _0x477940..predictTokens (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:9:49452)
    at async Object.predictTokens (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:14:10574)
    at async Object.handleMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:14:2040)
    [2025-02-09 11:13:09.300] [error] [LMSInternal][Client=LM Studio][Endpoint=regenerateLastMessage] Error in RPC handler: Error: Rehydrated error
    Error in iterating prediction stream: AssertionError:
    at _0x570e88. (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:411:106461)
    at _0x5ee1de.subscriber (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:84:1886)
    at _0x5ee1de.notifier (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:245:138165)
  • Caused By: Error: Channel Error
    at (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:106:63279)
  • Caused By: Error: received prediction-error
    at _0x570e88. (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:411:106461)
    at _0x307662._0x5a91ce (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:273120)
    at _0x307662.emit (node:events:519:28)
    at _0x307662.onChildMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:244622)
    at _0x307662.onChildMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:290626)
    at ForkUtilityProcess. (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/index.js:24:243563)
    at ForkUtilityProcess.emit (node:events:519:28)
    at ForkUtilityProcess.a.emit (node:electron/js2c/browser_init:2:71823)
  • Caused By: Error: Error in iterating prediction stream: AssertionError:
    at _0x477940..predictTokens (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:9:49452)
    at async Object.predictTokens (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:14:10574)
    at async Object.handleMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:14:2040)

Projects
None yet
Development

No branches or pull requests

4 participants