
Feature Request: GLM-4 9B Support #7778

Closed
arch-btw opened this issue Jun 5, 2024 · 7 comments

Labels: enhancement (New feature or request), stale

Comments

arch-btw (Contributor) commented Jun 5, 2024

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

It would be really cool to have support for these models, which were released today. They have some very impressive benchmarks. I've also been trying out the model myself in Hugging Face Spaces and noticed that it speaks many languages fluently and is knowledgeable on a wide range of topics. Thank you for your time.

Here are the download links:

Here is the English README: README_en.md

Motivation

The motivation for this feature is found in some of the model's technical highlights:

  • These models were trained on 10T tokens.
  • GLM-4-9B-Chat has 9B parameters.
  • GLM-4-9B-Chat-1M supports a 1M context length and scored 100% on the needle-in-a-haystack challenge.
  • GLM-4-9B models support 26 languages.
  • There is a vision model (glm-4v-9b).
  • Early impressions are very positive.

Here are some of the results:

Needle challenge: [figure: eval_needle]

LongBench: [figure: longbench]

Possible Implementation

We might be able to reuse some of the code from #6999.

There is also chatglm.cpp, but it doesn't support GLM-4. A sketch of the expected end-user flow follows below.
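
If support lands, end-user usage would presumably follow the standard llama.cpp flow. Here is a purely hypothetical sketch via llama-cpp-python; the GGUF file name is made up, and it assumes both that GLM-4 support has been merged and that a conversion to GGUF already exists:

```python
# Hypothetical sketch: assumes GLM-4 support has been merged into llama.cpp
# and that a GGUF file has already been produced with the usual conversion
# script -- neither is true at the time of writing.
from llama_cpp import Llama

llm = Llama(
    model_path="glm-4-9b-chat.Q4_K_M.gguf",  # made-up file name
    n_ctx=8192,        # the chat model advertises long-context support
    n_gpu_layers=-1,   # offload every layer if built with GPU support
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many languages do you speak?"}]
)
print(resp["choices"][0]["message"]["content"])
```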

arch-btw added the enhancement label Jun 5, 2024
foldl (Contributor) commented Jun 7, 2024

You can try chatllm.cpp, which supports GLM-4.

jamfor352 commented

> You can try chatllm.cpp, which supports GLM-4.

Can confirm this works and is cool 😎

It would be good to get this functionality into llama.cpp too, if only for the GPU acceleration.

ELigoP commented Jun 8, 2024

> You can try chatllm.cpp, which supports GLM-4.

Well, chatllm.cpp is CPU-only. Why not try the transformers version in fp16?

GPU support for GLM-4 in llama.cpp would be great; quantized versions would then appear, which would be even more comfortable to run.

GLM-4 looks comparable to or better than Llama 3, maybe even best-in-class for now.
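
For reference, a minimal sketch of the fp16 transformers route, assuming the Hugging Face repo id is THUDM/glm-4-9b-chat (the custom modeling code requires trust_remote_code):

```python
# Minimal fp16 sketch with transformers; the repo id THUDM/glm-4-9b-chat
# is an assumption, and its custom modeling code needs trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4-9b-chat",
    torch_dtype=torch.float16,   # fp16 weights
    device_map="auto",           # place layers on the available GPU(s)
    trust_remote_code=True,
).eval()

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```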

matteoserva (Contributor) commented

We might have this feature soon: #8031

github-actions bot added the stale label Jul 21, 2024
github-actions bot commented Aug 5, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions bot closed this as completed Aug 5, 2024
yukiarimo commented
Any updates?

yukiarimo commented
I saw it's merged, but does it work with llama-cpp-python, and how do I get the vision model working in GGUF?
