As soon as I have \n in my prompt string, the model just answers garbage. I know it works well with line breaks when using llama.cpp directly. Any suspicions about what is wrong, or what I am doing wrong? I am using this on Windows. Maybe llama.cpp does not compile to the expected text format or something? I really don't know. My prompt is really just the basic high-level API example, with a \n in the string that is then passed as output = llm(prompt); a rough sketch of what I am running is below.
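This is roughly the call in question, based on the llama-cpp-python high-level API example; the model path and prompt text are placeholders, not my exact values:

```python
from llama_cpp import Llama

# placeholder model path; the actual model file will differ
llm = Llama(model_path="./models/7B/ggml-model.bin")

# the prompt contains a literal newline, which is where the garbage output starts
prompt = "Q: Name the planets in the solar system.\nA:"
output = llm(prompt, max_tokens=64, stop=["Q:"], echo=True)
print(output["choices"][0]["text"])
```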
Replies: 1 comment
It seems to have been caused by a mismatched llama.cpp checkout. Since it seemed to work at first, I assumed it was fine. By the way, I checked out llama.cpp myself because the recursive checkout did not work, apparently because the submodule is linked via an http URL or something; I think there is an open issue about it. Cheers, and thanks for this awesome project!
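For anyone hitting the same thing, here is a sketch of the clone commands that should keep the vendored llama.cpp in sync with what the bindings expect (the repo URL is my assumption about the upstream project):

```bash
# fresh clone with the pinned llama.cpp submodule included
git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git

# or, in an existing clone, sync the submodule to the pinned commit
git submodule update --init --recursive
```

The point is to end up on the llama.cpp commit the bindings were written against, rather than whatever happens to be on llama.cpp master.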