Running on Windows #1
I'll try to get my Windows VM running this weekend, but in the meantime, can you please try with WSL?
Update: can you try this guide? https://github.com/ngxson/alpaca.cpp-webui/blob/master/doc/windows.md
Tried it as well. While the web UI runs and doesn't yield any errors, the AI doesn't respond at all. The last log line is: `[0] Socket proxy is initializing`
Me too, same exact thing.
For the moment, I'm spending time playing with the internal llama.cpp source code. The current implementation of llama.cpp/alpaca.cpp is not really stable (or suitable) for this application, so I may need more time to solve this issue. In the meantime, maybe you can try running the web UI from WSL? By the way, here's a snippet of what I'm doing so far: I was able to save the model's internal memory context, and I was also able to add some personalities to the bot.
I managed to get it working on WSL after some trial and error.
I am also on Windows (10); the web UI runs fine but the AI doesn't respond. Would be cool if we could get a fix. I appreciate your work, though!
I am trying to get this working on Windows 11 with GPT4All. There was an issue with setting the port, but I think I have resolved that by changing `package.json` from

```json
"proc_serve": "PORT=13000 next start",
```

to

```json
"proc_serve": "set PORT=13000 react-scripts start",
```
however, I now get the following exit codes. I'm not a coder and can't figure out where the error is.
```
[0]
[0] > [email protected] proc_serve
[0] > set PORT=13000 react-scripts start
[0]
[1]
[1] > [email protected] proc_native
[1] > node ./utils/native.js
[1]
[0] npm run proc_serve exited with code 0
--> Sending SIGTERM to other processes..
[1] npm run proc_native exited with code 1
```
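The exit code 0 from `proc_serve` is consistent with how `cmd.exe` parses `set`: in `set PORT=13000 react-scripts start`, everything after the `=` (the whole string `13000 react-scripts start`) becomes the value of `PORT`, no server is ever started, and the command exits successfully. Also note the original script runs `next start` (this is a Next.js app), not `react-scripts start`. A portable fix, assuming the `cross-env` dev dependency is installed (`npm i -D cross-env`), would look like:

```json
"scripts": {
  "proc_serve": "cross-env PORT=13000 next start"
}
```

Alternatively, on Windows only, chaining with `&&` (`set PORT=13000&& next start`) has the same effect without an extra dependency.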