The latest version kills python kernel with LlamaGrammar #1623
Comments
I'm experiencing something similar on x86_64 with CUDA since 0.2.84: a segfault when calling LlamaGrammar through the JSON-schema binding. There is no trouble without the grammar constraint. Rolling back to 0.2.83 fixes it.
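For reference, a minimal sketch of the JSON-schema path I mean (assuming the `LlamaGrammar.from_json_schema` helper; the schema here is illustrative):

```python
import json
from llama_cpp import LlamaGrammar

# Build a GBNF grammar from a JSON schema (schema is illustrative).
schema = {"type": "object", "properties": {"answer": {"type": "string"}}}
grammar = LlamaGrammar.from_json_schema(json.dumps(schema))
# Passing this grammar to a completion call is what segfaults on 0.2.84.
```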
Which do you think is to blame, llama-cpp-python or the native llama.cpp?
I can reproduce the same error: after running the code you provided, the program stops. The scenario where the problem occurs is described in detail in #1636 (along with the hardware/software specifications). Rolling back to 0.2.82 seems to fix the problem: the provided code executes without crashing on the previous version.
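To confirm which version is active after a rollback, a trivial check:

```python
import llama_cpp

# Should print the pinned version (e.g. 0.2.82 after rolling back).
print(llama_cpp.__version__)
```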
I found that recent llama.cpp releases split the main source file (llama.cpp) into several parts; the source files in the latest release cannot be found in releases from a week ago.
Looks like work is underway: #1637
Has anyone found a solution for this?
Getting the same error, both in Colab and on my personal machine. In VS Code it shows up as an OSError or WinError.
Pretty sure this PR addresses it, pending the next release: #1649
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior and Current Behavior
The latest version of llama-cpp-python kills the Python kernel when LlamaGrammar is used.
I ran the following code:
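(The snippet below is a minimal sketch of this kind of grammar-constrained call, assuming the standard Llama / LlamaGrammar API; the model path and grammar are illustrative, not the exact original.)

```python
from llama_cpp import Llama, LlamaGrammar

# Load a local GGUF model (path is illustrative).
llm = Llama(model_path="./models/model.gguf", n_gpu_layers=-1)

# A tiny GBNF grammar constraining output to "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

# The kernel dies on this call; without grammar=... it completes fine.
output = llm(
    "Is the sky blue? Answer yes or no: ",
    grammar=grammar,
    max_tokens=8,
)
print(output["choices"][0]["text"])
```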
When it ran, the Python kernel died immediately for an unknown reason. The kernel does not die without the use of LlamaGrammar. Because this behavior had not been observed until a few days ago, I suspect that my recent update of the llama-cpp-python module caused this problem.
What I tried is:
My experiment might suggest that this problem comes from the backend llama.cpp and is not llama-cpp-python's fault.
In any case, I want to know whether other people are experiencing this bug.
Environment
OS: macOS Sonoma
Processor: Apple M2 Max, 64 GB
Python version: 3.11