
b2039 #80

Merged
merged 5 commits into Nexesenex:_master_up on Feb 1, 2024

Conversation

Nexesenex
Owner

No description provided.

ggerganov and others added 5 commits January 31, 2024 17:30
* llama : remove LLAMA_MAX_DEVICES from llama.h

ggml-ci

* Update llama.cpp

Co-authored-by: slaren <[email protected]>

* server : remove LLAMA_MAX_DEVICES

ggml-ci

* llama : remove LLAMA_SUPPORTS_GPU_OFFLOAD

ggml-ci

* train : remove LLAMA_SUPPORTS_GPU_OFFLOAD

* readme : add deprecation notice

* readme : change deprecation notice to "remove" and fix url

* llama : remove gpu includes from llama.h

ggml-ci

---------

Co-authored-by: slaren <[email protected]>
* build vulkan as object

* vulkan ci

* support InternLM2 inference
  * add add_space_prefix KV pair
Nexesenex merged commit aa04d1e into Nexesenex:_master_up Feb 1, 2024
3 checks passed
Nexesenex pushed a commit that referenced this pull request Dec 22, 2024
* Slightly better

* Make the entire project c++17

---------

Co-authored-by: Iwan Kawrakow <[email protected]>
5 participants