Relationship to llama.cpp #10
Comments
Great question! Yes, this is inherited from llama.cpp, as noted in the Acknowledgements section. We had pushed our model support code into llama.cpp via ggerganov/llama.cpp#7931; however, there are some framework refinements in bitnet.cpp that have hard conflicts with the original llama.cpp code, so a new repo was needed.
Sorry, but this doesn't sound like a very credible reason, especially given Microsoft's history of taking over other (often FOSS) code and making it their own. It should be stated clearly up front, not as a footnote, that this code is a fork of llama.cpp, and exactly why the fork was needed. Still waiting for someone to address the questions from @dokterbob.
Can someone just open a pull request against llama.cpp with what's been done here? Thanks; that would be better practice in my view.
Maybe try reading the contributor's answer next time.
I share the same concerns. After checking the submodule in this repository (I personally dislike using submodules in Git), I found that it relies on an outdated fork of the original llama.cpp project, https://github.com/Eddie-Wang1120/llama.cpp.git, which is 320 commits behind. See https://github.com/microsoft/BitNet/blob/main/.gitmodules#L3
Will Microsoft seriously support this project? This repository looks more like a personal project.
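For anyone who wants to verify this themselves, the submodule pin lives in the repo's .gitmodules file. Only the fork URL is confirmed by the link above; the submodule name and path shown here are illustrative assumptions about the entry's shape:

```ini
; hypothetical .gitmodules entry -- the path/name are assumed, the url is
; the fork linked in the comment above
[submodule "3rdparty/llama.cpp"]
	path = 3rdparty/llama.cpp
	url = https://github.com/Eddie-Wang1120/llama.cpp.git
```

From inside the checked-out submodule, you can measure how far it lags upstream with plain git, e.g. `git remote add upstream https://github.com/ggerganov/llama.cpp.git && git fetch upstream && git rev-list --count HEAD..upstream/master`.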
Thanks for investigating! 💯
Indeed very suspicious, and it seems more like some kind of clickbait project. They racked up 10,000 stars in no time, and there have been nearly no commits or useful responses since. I hate to sound negative, but I hate even more to get involved in these kinds of unethical corporate side hustles. In addition, I also hate submodules! :( Avoiding a huge number of scattered external files may be the very reason llama.cpp was so successful: 1 screen, 1 editor, 1 page, 1 tab…
It would be great to see the upstream PR from their fork of llama.cpp |
First of all: CONGRATS ON YOUR AMAZING RESEARCH WORK.

Considering that this is using GGML and seems based directly on llama.cpp:

- Why is this a separate project to llama.cpp, given that llama.cpp already supports BitNet ternary quants? (ggerganov/llama.cpp#8151)
- Are these simply more optimised kernels?
- If so, how do they compare to llama's implementation?
- Can/should they be contributed back to llama.cpp?
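For context on the "ternary quants" the question refers to: BitNet-style models constrain weights to {-1, 0, +1} plus a scale. A minimal Python sketch of absmean ternary quantization in the spirit of the BitNet b1.58 paper follows; the function name, the epsilon value, and the per-tensor (rather than per-group) scaling are illustrative assumptions, not this repo's actual kernel code:

```python
def ternary_quantize(weights, eps=1e-8):
    """Quantize a list of floats to {-1, 0, +1} with a per-tensor scale.

    Sketch of absmean quantization: scale by the mean absolute weight,
    then round and clip each value into the ternary set.
    """
    gamma = sum(abs(w) for w in weights) / len(weights)  # mean |w|
    scale = gamma + eps  # eps guards against an all-zero tensor
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale

q, s = ternary_quantize([0.9, -0.05, -1.2, 0.4])
# q holds only values from {-1, 0, +1}; w ≈ q[i] * s reconstructs the weight
```

Kernels specialised for this representation can replace multiplications with additions, subtractions, and skips, which is presumably what the "more optimised kernels" question is probing.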