
Compile bug: iOS Swift Xcode build error after upgrading llama.cpp (Swift build now uses CMake) #10747

Open
jiabochao opened this issue Dec 10, 2024 · 6 comments

Comments

@jiabochao

Git commit

```shell
$ git rev-parse HEAD
43ed389
```

Operating systems

Mac

GGML backends

Metal

Problem description & steps to reproduce

iOS Swift Xcode build error after upgrading llama.cpp.

Before the upgrade, the code compiled successfully. After the upgrade, it throws compilation errors of the form "Cannot find type 'xxx' in scope."

(screenshot of the Xcode build errors)

First Bad Commit

43ed389

Relevant log output

```
/ios/llama.cpp.swift/LibLlama.swift:8:39 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:37 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:56 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:76 Cannot find type 'llama_pos' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:99 Cannot find type 'llama_seq_id' in scope
/ios/llama.cpp.swift/LibLlama.swift:27:48 Cannot find type 'llama_sampler' in scope
/ios/llama.cpp.swift/LibLlama.swift:28:24 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:29:31 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:44:22 Cannot find 'llama_batch_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:46:23 Cannot find 'llama_sampler_chain_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:47:25 Cannot find 'llama_sampler_chain_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:48:9 Cannot find 'llama_sampler_chain_add' in scope
/ios/llama.cpp.swift/LibLlama.swift:48:48 Cannot find 'llama_sampler_init_temp' in scope
/ios/llama.cpp.swift/LibLlama.swift:49:9 Cannot find 'llama_sampler_chain_add' in scope
/ios/llama.cpp.swift/LibLlama.swift:49:48 Cannot find 'llama_sampler_init_dist' in scope
/ios/llama.cpp.swift/LibLlama.swift:53:9 Cannot find 'llama_sampler_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:54:9 Cannot find 'llama_batch_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:55:9 Cannot find 'llama_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:56:9 Cannot find 'llama_free_model' in scope
/ios/llama.cpp.swift/LibLlama.swift:57:9 Cannot find 'llama_backend_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:61:9 Cannot find 'llama_backend_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:62:28 Cannot find 'llama_model_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:68:21 Cannot find 'llama_load_model_from_file' in scope
/ios/llama.cpp.swift/LibLlama.swift:77:26 Cannot find 'llama_context_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:82:23 Cannot find 'llama_new_context_with_model' in scope
/ios/llama.cpp.swift/LibLlama.swift:100:22 Cannot find 'llama_model_desc' in scope
/ios/llama.cpp.swift/LibLlama.swift:121:21 Cannot find 'llama_n_ctx' in scope
/ios/llama.cpp.swift/LibLlama.swift:142:12 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:150:27 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:152:24 Cannot find 'llama_sampler_sample' in scope
/ios/llama.cpp.swift/LibLlama.swift:154:12 Cannot find 'llama_token_is_eog' in scope
/ios/llama.cpp.swift/LibLlama.swift:185:12 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:211:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:213:30 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:215:16 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:218:13 Cannot find 'llama_synchronize' in scope
/ios/llama.cpp.swift/LibLlama.swift:220:28 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:224:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:226:30 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:235:20 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:238:17 Cannot find 'llama_synchronize' in scope
/ios/llama.cpp.swift/LibLlama.swift:241:28 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:243:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:245:24 No exact matches in call to initializer
/ios/llama.cpp.swift/LibLlama.swift:246:24 No exact matches in call to initializer
/ios/llama.cpp.swift/LibLlama.swift:254:32 Cannot convert value of type 'Duration' to expected argument type 'Double'
/ios/llama.cpp.swift/LibLlama.swift:255:32 Cannot convert value of type 'Duration' to expected argument type 'Double'
/ios/llama.cpp.swift/LibLlama.swift:272:64 Cannot find 'llama_model_size' in scope
/ios/llama.cpp.swift/LibLlama.swift:273:62 Cannot find 'llama_model_n_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:293:9 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:296:60 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:299:43 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:300:26 Cannot find 'llama_tokenize' in scope
/ios/llama.cpp.swift/LibLlama.swift:302:27 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:313:40 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:319:23 Cannot find 'llama_token_to_piece' in scope
/ios/llama.cpp.swift/LibLlama.swift:327:30 Cannot find 'llama_token_to_piece' in scope
/ios/llama.cpp.swift/LibLlama.swift:328:33 Generic parameter 'Element' could not be inferred

~/Library/Developer/Xcode/DerivedData/Runner-efnwjojzxwrmmpfdjskgbtmftvem/SourcePackages/checkouts/llama.cpp/Sources/llama/llama.h:3:10 'llama.h' file not found with <angled> include; use "quotes" instead
```
@nvoter

nvoter commented Dec 11, 2024

Same issue.

@pgorzelany

Can confirm, same issue

@slaren
Collaborator

slaren commented Dec 11, 2024

The way it works now is that you need to build llama.cpp with CMake and then install it using `cmake --install`. This should allow Swift to find the llama.cpp library. See the way the CI builds the Swift example:

```yaml
- name: Build llama.cpp with CMake
  id: cmake_build
  run: |
    sysctl -a
    mkdir build
    cd build
    cmake -G Xcode .. \
      -DGGML_METAL_USE_BF16=ON \
      -DGGML_METAL_EMBED_LIBRARY=ON \
      -DLLAMA_BUILD_EXAMPLES=OFF \
      -DLLAMA_BUILD_TESTS=OFF \
      -DLLAMA_BUILD_SERVER=OFF \
      -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
    cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)
    sudo cmake --install . --config Release

- name: xcodebuild for swift package
  id: xcodebuild
  run: |
    xcodebuild -scheme llama-Package -destination "${{ matrix.destination }}"
```

@pgorzelany

First of all, thank you for the clarification. I don't usually use CMake, so I am not familiar with the build process, but the project still exposes a Package.swift file that currently does not seem to work (even the example SwiftUI projects are broken).

Previously, when developing for iOS and macOS, we could point Xcode at the llama.cpp Swift package and it would "just work," which was pretty nice. If additional steps are now required, can we have some documentation around the process?

@ggerganov
Owner

ggerganov commented Dec 12, 2024

@pgorzelany Doing what the CI workflows do (see slaren's comment) should work.

The CI workflows install the llama.cpp binaries into the default system paths, so your Swift project will find them automatically. However, you might not always want to do that. Instead, you can build different variants of the binaries (e.g. for iOS, tvOS, macOS, etc.) and install them into custom paths using CMAKE_INSTALL_PREFIX. After that, you can point your project at that install location by updating the Build Settings in Xcode. Here is how I configured the llama.swiftui example on my machine:

(screenshot of the llama.swiftui Build Settings in Xcode)
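As a minimal sketch of such a custom-prefix install (the build directory and install path below are illustrative choices, not prescribed by the project; the CMake options mirror the CI snippet above):

```shell
# Configure an Xcode build of llama.cpp that installs into a custom,
# user-writable prefix instead of the default system paths.
# "build-macos" and "$HOME/llama-install/macos" are hypothetical names.
cmake -B build-macos -G Xcode \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_SERVER=OFF \
    -DCMAKE_INSTALL_PREFIX="$HOME/llama-install/macos"

# Build and install; no sudo is needed for a prefix under $HOME.
cmake --build build-macos --config Release
cmake --install build-macos --config Release
```

After installing, the headers and libraries end up under the chosen prefix (e.g. `.../include` and `.../lib`), which is what you would point the Header Search Paths and Library Search Paths at in Xcode's Build Settings.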

The process is a bit more involved than before, but it is more flexible and much easier to maintain. It would be useful to add step-by-step instructions to the example, but I don't have much experience with Xcode (there is stuff like code signing, development teams, etc.), so I am hoping that people who are familiar with it will contribute and explain how to set up a project correctly.

So at the moment, if you are looking for a point-and-click solution, there isn't one yet. You will need to understand how CMake works and start using it.

@pgorzelany

Thank you. Once I understand how to set it up properly, I will try to contribute some documentation. This project is used in multiple iOS and macOS apps, and it was very convenient to use via the Package.swift file; maybe there is a way to modify Package.swift so that it works again.
