
ERROR Error: running 'make' failed -- This is in Linux Lite #335

Closed

luisvolkyazul opened this issue Apr 1, 2023 · 8 comments
@luisvolkyazul

make
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -msse3
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:
I CC: cc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
I CXX: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0

cc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -msse3 -c ggml.c -o ggml.o
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:101,
from ggml.c:155:
ggml.c: In function ‘ggml_vec_dot_f16’:
/usr/lib/gcc/x86_64-linux-gnu/11/include/f16cintrin.h:52:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_cvtph_ps’: target specific option mismatch
52 | _mm256_cvtph_ps (__m128i __A)
| ^~~~~~~~~~~~~~~
ggml.c:915:33: note: called from here
915 | #define GGML_F32Cx8_LOAD(x) _mm256_cvtph_ps(_mm_loadu_si128((__m128i *)(x)))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml.c:925:37: note: in expansion of macro ‘GGML_F32Cx8_LOAD’
925 | #define GGML_F16_VEC_LOAD(p, i) GGML_F32Cx8_LOAD(p)
| ^~~~~~~~~~~~~~~~
ggml.c:1319:21: note: in expansion of macro ‘GGML_F16_VEC_LOAD’
1319 | ay[j] = GGML_F16_VEC_LOAD(y + i + j*GGML_F16_EPR, j);
| ^~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:101,
from ggml.c:155:
/usr/lib/gcc/x86_64-linux-gnu/11/include/f16cintrin.h:52:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_cvtph_ps’: target specific option mismatch
52 | _mm256_cvtph_ps (__m128i __A)
| ^~~~~~~~~~~~~~~
ggml.c:915:33: note: called from here
915 | #define GGML_F32Cx8_LOAD(x) _mm256_cvtph_ps(_mm_loadu_si128((__m128i *)(x)))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml.c:925:37: note: in expansion of macro ‘GGML_F32Cx8_LOAD’
925 | #define GGML_F16_VEC_LOAD(p, i) GGML_F32Cx8_LOAD(p)
| ^~~~~~~~~~~~~~~~
ggml.c:1318:21: note: in expansion of macro ‘GGML_F16_VEC_LOAD’
1318 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
| ^~~~~~~~~~~~~~~~~
[The same two errors at ggml.c:1318 and ggml.c:1319 repeat verbatim for the remaining F16C call sites; duplicate output omitted.]
make: *** [Makefile:221: ggml.o] Error 1
ERROR Error: running 'make' failed
at LLaMA.make (/usr/lib/node_modules/dalai/llama.js:50:15)
at async Dalai.add (/usr/lib/node_modules/dalai/index.js:412:5)
at async Dalai.install (/usr/lib/node_modules/dalai/index.js:346:5)

@VanHallein commented Apr 1, 2023

To my understanding, the build processes for alpaca.cpp and llama.cpp are broken when using make. Building with CMake, however, works without any issues (it needs GCC 10).

Reading the code, it runs CMake for Windows but plain make for Linux, which is why the build succeeds under MS Windows. Could somebody please change the Linux build to use CMake?
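For what it's worth, the compiler error itself ("inlining failed ... target specific option mismatch" on _mm256_cvtph_ps) means the hand-picked flags in the Makefile (-mavx -msse3) enable AVX but not the F16C instructions that intrinsic needs. A possible compile-time workaround, sketched here on the assumption that your CPU actually supports F16C, is to check for the feature and then override CFLAGS when invoking make (command-line variables normally take precedence over a Makefile's own assignments):

# Does the CPU advertise F16C? No output means it does not; do not force the flag in that case.
grep -o -m1 f16c /proc/cpuinfo

# Rebuild with F16C enabled, keeping the other flags from the build log above.
make clean
make CFLAGS="-I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -msse3 -mf16c"

Guessing instruction-set flags by hand is fragile, though, which is presumably why the CMake build behaves better here.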

@luisvolkyazul (Author)

To my understanding, the build processes for alpaca.cpp and llama.cpp are broken when using make. Building with CMake, however, works without any issues (it needs GCC 10).

Reading the code, it runs CMake for Windows but plain make for Linux, which is why the build succeeds under MS Windows. Could somebody please change the Linux build to use CMake?

Thank you for the information. I made the change in alpaca.js and ran cmake. It did not produce the same error and appears to have finished without issues, but once I run npx dalai serve and navigate to the URL, it shows the interface with no models to select from. Any suggestions beyond this point are appreciated!

@VanHallein

You need to run cmake the following way:

cmake .
cmake --build . --config Release

Only that way does it compile fully. How are you running cmake?
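For reference, if the goal is to have dalai run those two steps itself, the build call in llama.js/alpaca.js would need to shell out to CMake instead of make. A minimal sketch of the idea, assuming the build step runs shell commands from the checkout directory (hypothetical; dalai's actual llama.js structure may differ):

const { execSync } = require("child_process");

// Hypothetical replacement for the 'make' step: configure, then build.
// 'cwd' is assumed to be the llama.cpp or alpaca.cpp checkout directory.
function buildWithCMake(cwd) {
  execSync("cmake .", { cwd, stdio: "inherit" });
  execSync("cmake --build . --config Release", { cwd, stdio: "inherit" });
}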

@luisvolkyazul (Author)

VanHallein, thank you so much for your assistance with this! I had only changed alpaca.js to cmake, and then I ran just cmake .

I reran it the way you describe above and it appears to have worked: I see the model, and I did the same with llama. But now I am getting a different error, see below:

npx dalai serve
mkdir /home/luis/dalai
Server running on http://localhost:3000/

query: { prompt: '/stop', models: [] }
require log TypeError: Cannot read properties of undefined (reading 'onData')
at module.exports (/home/luis/node_modules/dalai/cmds/stop.js:2:18)
at Dalai.query (/home/luis/node_modules/dalai/index.js:207:11)
at Socket.<anonymous> (/home/luis/node_modules/dalai/index.js:534:20)
at Socket.emit (node:events:512:28)
at Socket.emitUntyped (/home/luis/node_modules/socket.io/dist/typed-events.js:69:22)
at /home/luis/node_modules/socket.io/dist/socket.js:703:39
at process.processTicksAndRejections (node:internal/process/task_queues:77:11)
/home/luis/node_modules/dalai/index.js:219
let [Core, Model] = req.model.split(".")
^

TypeError: Cannot read properties of undefined (reading 'split')
at Dalai.query (/home/luis/node_modules/dalai/index.js:219:35)
at Socket.<anonymous> (/home/luis/node_modules/dalai/index.js:534:20)
at Socket.emit (node:events:512:28)
at Socket.emitUntyped (/home/luis/node_modules/socket.io/dist/typed-events.js:69:22)
at /home/luis/node_modules/socket.io/dist/socket.js:703:39
at process.processTicksAndRejections (node:internal/process/task_queues:77:11)

Node.js v19.8.1

@luisvolkyazul (Author)

I found this: "fix the issue of the split function" #348. Trying it now.
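For context, the crash above happens because req.model is undefined when a request arrives without a model selected, and index.js calls .split(".") on it unconditionally. A guard along these lines (a sketch of the idea only, not necessarily what #348 actually does) would avoid the TypeError:

// Hypothetical guard in Dalai.query (index.js): bail out early instead of
// calling .split() on an undefined req.model.
if (typeof req.model !== "string" || !req.model.includes(".")) {
  console.error("query received without a usable 'model' field:", req.model);
  return;
}
let [Core, Model] = req.model.split(".");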

@luisvolkyazul (Author)

I can now see the alpaca model name on the web interface, but I am not sure whether I am using it wrong: I submit a prompt and it just hangs, doing nothing. The llama model name does not show up, though.

@luisvolkyazul (Author)

npx dalai serve
mkdir /home/luis/dalai
Server running on http://localhost:3000/

query: { method: 'installed', models: [] }
modelsPath /home/luis/dalai/alpaca/models
{ modelFolders: [ '7B' ] }
exists 7B
modelsPath /home/luis/dalai/llama/models
{ modelFolders: [ '7B' ] }
query: {
  seed: -1,
  threads: 4,
  n_predict: 200,
  top_k: 40,
  top_p: 0.9,
  temp: 0.8,
  repeat_last_n: 64,
  repeat_penalty: 1.3,
  debug: false,
  models: [ 'alpaca.7B' ],
  prompt: 'The expected response for a highly intelligent chatbot to "what is a car?" is \n' +
    '"',
  id: 'TS-1680547166766-8435'
}
{ Core: 'alpaca', Model: '7B' }
exec: /home/luis/dalai/alpaca/main --seed -1 --threads 4 --n_predict 200 --model models/7B/ggml-model-q4_0.bin --top_k 40 --top_p 0.9 --temp 0.8 --repeat_last_n 64 --repeat_penalty 1.3 -p "The expected response for a highly intelligent chatbot to "what is a car?" is
"" in /home/luis/dalai/alpaca

@luisvolkyazul (Author)

Accidentally closed it... I tried adding above what shows up when running the alpaca model.
