ggml : fix missing cpu_set_t on emscripten #9336

Merged: 3 commits into ggerganov:master on Sep 7, 2024

Conversation

ngxson (Collaborator) commented Sep 6, 2024

cpu_set_t doesn't exist on emscripten, so we need to guard its use with #ifdef __gnu_linux__. The feature was added in #8672.

The same approach is already used in another part of the code (for NUMA):

llama.cpp/ggml/src/ggml.c

Lines 3134 to 3136 in 409dc4f

#if defined(__gnu_linux__)
cpu_set_t cpuset; // cpuset from numactl
#else
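
For reference, here is a minimal, self-contained sketch of that guard pattern (hypothetical helper name, not the verbatim ggml.c code): cpu_set_t and the CPU_* macros are GNU/glibc extensions, so anything that touches them is compiled only under __gnu_linux__ and stubbed out everywhere else, emscripten included.

// Sketch only: count_allowed_cpus is a hypothetical helper, not a ggml function.
#if defined(__gnu_linux__)
#ifndef _GNU_SOURCE
#define _GNU_SOURCE // cpu_set_t / sched_getaffinity need this on glibc
#endif
#include <sched.h>

static int count_allowed_cpus(void) {
    cpu_set_t cpuset; // the type that is missing on emscripten
    CPU_ZERO(&cpuset);
    if (sched_getaffinity(0, sizeof(cpuset), &cpuset) != 0) {
        return -1; // query failed
    }
    return CPU_COUNT(&cpuset);
}
#else
static int count_allowed_cpus(void) {
    return -1; // no affinity API on this platform (e.g. emscripten)
}
#endif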


ngxson requested a review from slaren on September 6, 2024 at 14:30
slaren (Collaborator) commented Sep 6, 2024

@max-krasnyansky @fmz please take a look.

fmz (Contributor) commented Sep 6, 2024

@ngxson this should be handled by defining _GNU_SOURCE before including <sched.h> (pretty finicky, I know):

https://github.com/ggerganov/llama.cpp/pull/8672/files#diff-d4bb67eec77d05ad8d4056c50492de104bd17dc6e0b0f9919854aeeaa5002842L1250

Did something change?
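
For context, a minimal example of that ordering requirement (illustration only, not code from the PR): on glibc, cpu_set_t is exposed only when _GNU_SOURCE is defined before the first include of <sched.h>; defining it later in the file has no effect.

#define _GNU_SOURCE       // must come before the first system include
#include <sched.h>

int main(void) {
    cpu_set_t set;        // visible only because _GNU_SOURCE preceded <sched.h>
    CPU_ZERO(&set);
    CPU_SET(0, &set);     // ask to run on CPU 0 (illustration only)
    return sched_setaffinity(0, sizeof(set), &set) == 0 ? 0 : 1;
}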

The github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Sep 6, 2024
ngxson (Collaborator, Author) commented Sep 6, 2024

@fmz for Android: yes, it's still there:

if (CMAKE_SYSTEM_NAME MATCHES "Linux" OR CMAKE_SYSTEM_NAME MATCHES "Android")
add_compile_definitions(_GNU_SOURCE)
endif()

But since I can't test the Android build, I think it's better to reduce the scope of this PR to target only emscripten. We can continue the Android discussion in #9324.

For emscripten, the build still fails even with #define _GNU_SOURCE at the beginning of the file.

To reproduce, follow this guide: https://github.com/ngxson/wllama?tab=readme-ov-file#how-to-compile-the-binary-yourself

Any ideas?

ngxson changed the title from "ggml : fix missing cpu_set_t on emscripten and android" to "ggml : fix missing cpu_set_t on emscripten" on Sep 6, 2024
ngxson (Collaborator, Author) commented Sep 6, 2024

Also, in another part of the code:

llama.cpp/ggml/src/ggml.c

Lines 19179 to 19181 in 409dc4f

// Android's libc implementation "bionic" does not support setting affinity
#if defined(__gnu_linux__)
static void set_numa_thread_affinity(int thread_n) {

So I think it would make more sense to have the same check for ggml_thread_apply_affinity and ggml_thread_apply_priority. It seems strange to have different checks for the same feature.

fmz (Contributor) commented Sep 6, 2024

> Also, in another part of the code:
>
> llama.cpp/ggml/src/ggml.c
>
> Lines 19179 to 19181 in 409dc4f
>
> // Android's libc implementation "bionic" does not support setting affinity
> #if defined(__gnu_linux__)
> static void set_numa_thread_affinity(int thread_n) {
>
> So I think it would make more sense to have the same check for ggml_thread_apply_affinity and ggml_thread_apply_priority. It seems strange to have different checks for the same feature.

Yeah, I would suggest you do that.

> @fmz for Android: yes, it's still there:
>
> if (CMAKE_SYSTEM_NAME MATCHES "Linux" OR CMAKE_SYSTEM_NAME MATCHES "Android")
> add_compile_definitions(_GNU_SOURCE)
> endif()
>
> But since I can't test the Android build, I think it's better to reduce the scope of this PR to target only emscripten. We can continue the Android discussion in #9324.
>
> For emscripten, the build still fails even with #define _GNU_SOURCE at the beginning of the file.
>
> To reproduce, follow this guide: https://github.com/ngxson/wllama?tab=readme-ov-file#how-to-compile-the-binary-yourself
>
> Any ideas?

Interesting...
Would it be possible to specifically check for emscripten?
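
A minimal illustration of that idea (hypothetical macro name; this is not the approach the PR ended up merging): the emscripten toolchain predefines __EMSCRIPTEN__, so the affinity code could be excluded there explicitly instead of being opted in per platform.

// Hypothetical feature flag for illustration only.
#if defined(__EMSCRIPTEN__)
#define GGML_HAS_CPU_AFFINITY 0 // browser/wasm builds: no thread-affinity support
#else
#define GGML_HAS_CPU_AFFINITY 1
#endif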

fmz (Contributor) commented Sep 6, 2024

> Also, in another part of the code:
>
> llama.cpp/ggml/src/ggml.c
>
> Lines 19179 to 19181 in 409dc4f
>
> // Android's libc implementation "bionic" does not support setting affinity
> #if defined(__gnu_linux__)
> static void set_numa_thread_affinity(int thread_n) {
>
> So I think it would make more sense to have the same check for ggml_thread_apply_affinity and ggml_thread_apply_priority. It seems strange to have different checks for the same feature.

I'm not sure I follow. They're both under the same #if/else branches.

ngxson (Collaborator, Author) commented Sep 6, 2024

What I mean is:

  • ggml_get_numa_affinity is only enabled with #if defined(__gnu_linux__)
  • ggml_thread_apply_affinity is currently enabled for ALL platforms (except Apple, which uses a different implementation)

See the code:

llama.cpp/ggml/src/ggml.c

Lines 19525 to 19527 in 134bc38

#elif defined(__APPLE__)
#include <sys/types.h>
#include <sys/resource.h>

Then below that:

llama.cpp/ggml/src/ggml.c

Lines 19558 to 19560 in 134bc38

#else // posix?

My question is: can we simply apply the same #if defined(__gnu_linux__) check that ggml_get_numa_affinity uses?
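
To make the mismatch concrete, the branch structure being described looks roughly like this (simplified outline, assuming a Windows branch opens the chain; not verbatim ggml.c code). Emscripten currently falls into the "posix?" branch, which is where cpu_set_t appears.

#include <stdio.h>

#if defined(_WIN32)
static const char * affinity_impl(void) { return "win32"; }   // Windows implementation
#elif defined(__APPLE__)
static const char * affinity_impl(void) { return "apple"; }   // Apple implementation, no cpu_set_t
#else // posix?
static const char * affinity_impl(void) { return "posix"; }   // emscripten ends up here, where cpu_set_t is referenced
#endif

int main(void) {
    printf("affinity implementation: %s\n", affinity_impl());
    return 0;
}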

fmz (Contributor) commented Sep 6, 2024

> llama.cpp/ggml/src/ggml.c

Ah yes, I see no harm in that.

ngxson (Collaborator, Author) commented Sep 6, 2024

OK, thanks for the confirmation.

So the current PR does exactly that: the code branch using cpu_set_t is only enabled with #if defined(__gnu_linux__). Otherwise, it is a no-op.

This should fix compilation on both emscripten and Android (issue #9324). cc @slaren, can you give a review? Thanks!
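
For readers following along, here is a minimal sketch of the shape of that change (hypothetical helper name and signature, not the exact merged code): the cpu_set_t branch is compiled only under __gnu_linux__, and every other platform that reaches the "posix?" branch gets a no-op.

#if defined(__gnu_linux__) && !defined(_GNU_SOURCE)
#define _GNU_SOURCE // required before any system header for cpu_set_t on glibc
#endif

#include <stdbool.h>

#if defined(__gnu_linux__)
#include <sched.h>

// Pin the calling thread to the CPUs enabled in `mask` (n_cpus entries).
static bool thread_apply_affinity(const bool * mask, int n_cpus) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < n_cpus && i < CPU_SETSIZE; i++) {
        if (mask[i]) {
            CPU_SET(i, &set);
        }
    }
    return sched_setaffinity(0, sizeof(set), &set) == 0;
}
#else
// emscripten (and anything else without the glibc affinity API): no-op.
static bool thread_apply_affinity(const bool * mask, int n_cpus) {
    (void) mask;
    (void) n_cpus;
    return true;
}
#endif

Callers can then invoke the helper unconditionally; on platforms without affinity support the call simply succeeds without doing anything.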

ngxson merged commit 947538a into ggerganov:master on Sep 7, 2024
52 checks passed
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
* ggml : fix missing cpu_set_t on emscripten

* better version

* bring back android part
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
* ggml : fix missing cpu_set_t on emscripten

* better version

* bring back android part
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024
* ggml : fix missing cpu_set_t on emscripten

* better version

* bring back android part
Labels: ggml (changes relating to the ggml tensor library for machine learning)

4 participants