
Axios error when attempting npx dalai alpaca install 13B #350

Open
MichaelCharles opened this issue Apr 2, 2023 · 18 comments

@MichaelCharles

I'm on an M1 MacBook Pro.
Node 18.3

Here is the output in my terminal:

npx dalai alpaca install 13B
mkdir /Users/itsame/dalai
{ method: 'install', callparams: [ '13B' ] }
mkdir /Users/itsame/dalai/alpaca
try fetching /Users/itsame/dalai/alpaca https://github.com/ItsPi3141/alpaca.cpp
[E] Pull TypeError: Cannot read properties of null (reading 'split')
    at new GitConfig (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/isomorphic-git/index.cjs:1604:30)
    at GitConfig.from (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/isomorphic-git/index.cjs:1627:12)
    at GitConfigManager.get (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/isomorphic-git/index.cjs:1750:22)
    at async _getConfig (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/isomorphic-git/index.cjs:5397:18)
    at async normalizeAuthorObject (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/isomorphic-git/index.cjs:5407:19)
    at async Object.pull (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/isomorphic-git/index.cjs:11682:20)
    at async Dalai.add (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/dalai/index.js:394:7)
    at async Dalai.install (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/dalai/index.js:346:5) {
  caller: 'git.pull'
}
try cloning /Users/itsame/dalai/alpaca https://github.com/ItsPi3141/alpaca.cpp
next alpaca [AsyncFunction: make]
exec: make in /Users/itsame/dalai/alpaca
make
exit

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ make
I llama.cpp build info:
I UNAME_S:  Darwin
I UNAME_P:  arm
I UNAME_M:  arm64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:   -framework Accelerate
I CC:       Apple clang version 14.0.0 (clang-1400.0.29.202)
I CXX:      Apple clang version 14.0.0 (clang-1400.0.29.202)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -DGGML_USE_ACCELERATE   -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c utils.cpp -o utils.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread main.cpp ggml.o utils.o -o main  -framework Accelerate
./main -h
usage: ./main [options]

options:
  -h, --help            show this help message and exit
  -i, --interactive     run in interactive mode
  --interactive-start   run in interactive mode and poll user input at startup
  -r PROMPT, --reverse-prompt PROMPT
                        in interactive mode, poll user input upon seeing PROMPT
  --color               colorise output to distinguish prompt and user input from generations
  -s SEED, --seed SEED  RNG seed (default: -1)
  -t N, --threads N     number of threads to use during computation (default: 4)
  -p PROMPT, --prompt PROMPT
                        prompt to start generation with (default: random)
  -f FNAME, --file FNAME
                        prompt file to start generation.
  -n N, --n_predict N   number of tokens to predict (default: 128)
  --top_k N             top-k sampling (default: 40)
  --top_p N             top-p sampling (default: 0.9)
  --repeat_last_n N     last n tokens to consider for penalize (default: 64)
  --repeat_penalty N    penalize repeat sequence of tokens (default: 1.3)
  -c N, --ctx_size N    size of the prompt context (default: 2048)
  --temp N              temperature (default: 0.1)
  -b N, --batch_size N  batch size for prompt processing (default: 8)
  -m FNAME, --model FNAME
                        model path (default: ggml-alpaca-7b-q4.bin)

c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize  -framework Accelerate
bash-3.2$ exit
exit
alpaca.add [ '13B' ]
dir /Users/itsame/dalai/alpaca/models/13B
downloading torrent
ERROR AxiosError: Request failed with status code 404
    at settle (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/axios/dist/node/axios.cjs:1900:12)
    at RedirectableRequest.handleResponse (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/axios/dist/node/axios.cjs:2900:9)
    at RedirectableRequest.emit (node:events:527:28)
    at RedirectableRequest._processResponse (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/follow-redirects/index.js:356:10)
    at RedirectableRequest._onNativeResponse (/Users/itsame/.npm/_npx/3c737cbb02d79cc9/node_modules/follow-redirects/index.js:62:10)
    at Object.onceWrapper (node:events:642:26)
    at ClientRequest.emit (node:events:527:28)
    at HTTPParser.parserOnIncomingClient (node:_http_client:639:27)
    at HTTPParser.parserOnHeadersComplete (node:_http_common:117:17)
    at TLSSocket.socketOnData (node:_http_client:502:22) {
  code: 'ERR_BAD_REQUEST',
  config: {
    transitional: {
      silentJSONParsing: true,
      forcedJSONParsing: true,
      clarifyTimeoutError: false
    },
    adapter: [ 'xhr', 'http' ],
    transformRequest: [ [Function: transformRequest] ],
    transformResponse: [ [Function: transformResponse] ],
    timeout: 0,
    xsrfCookieName: 'XSRF-TOKEN',
    xsrfHeaderName: 'X-XSRF-TOKEN',
    maxContentLength: Infinity,
    maxBodyLength: -1,
    env: { FormData: [Function], Blob: [class Blob] },
    validateStatus: [Function: validateStatus],
    headers: AxiosHeaders {
      Accept: 'application/json, text/plain, */*',
      'User-Agent': 'axios/1.3.4',
      'Accept-Encoding': 'gzip, compress, deflate, br'
    },
    url: 'https://huggingface.co/Pi3141/alpaca-13B-ggml/resolve/main/ggml-model-q4_0.bin',
    method: 'get',
    responseType: 'stream',
    onDownloadProgress: [Function: onDownloadProgress],
    data: undefined
  },
  request: <ref *1> ClientRequest {
    _events: [Object: null prototype] {
      abort: [Function (anonymous)],
      aborted: [Function (anonymous)],
      connect: [Function (anonymous)],
      error: [Function (anonymous)],
      socket: [Function (anonymous)],
      timeout: [Function (anonymous)],
      prefinish: [Function: requestOnPrefinish]
    },
    _eventsCount: 7,
    _maxListeners: undefined,
    outputData: [],
    outputSize: 0,
    writable: true,
    destroyed: false,
    _last: true,
    chunkedEncoding: false,
    shouldKeepAlive: false,
    maxRequestsOnConnectionReached: false,
    _defaultKeepAlive: true,
    useChunkedEncodingByDefault: false,
    sendDate: false,
    _removedConnection: false,
    _removedContLen: false,
    _removedTE: false,
    _contentLength: 0,
    _hasBody: true,
    _trailer: '',
    finished: true,
    _headerSent: true,
    _closed: false,
    socket: TLSSocket {
      _tlsOptions: [Object],
      _secureEstablished: true,
      _securePending: false,
      _newSessionPending: false,
      _controlReleased: true,
      secureConnecting: false,
      _SNICallback: null,
      servername: 'huggingface.co',
      alpnProtocol: false,
      authorized: true,
      authorizationError: null,
      encrypted: true,
      _events: [Object: null prototype],
      _eventsCount: 10,
      connecting: false,
      _hadError: false,
      _parent: null,
      _host: 'huggingface.co',
      _readableState: [ReadableState],
      _maxListeners: undefined,
      _writableState: [WritableState],
      allowHalfOpen: false,
      _sockname: null,
      _pendingData: null,
      _pendingEncoding: '',
      server: undefined,
      _server: null,
      ssl: [TLSWrap],
      _requestCert: true,
      _rejectUnauthorized: true,
      parser: null,
      _httpMessage: [Circular *1],
      [Symbol(res)]: [TLSWrap],
      [Symbol(verified)]: true,
      [Symbol(pendingSession)]: null,
      [Symbol(async_id_symbol)]: 1615,
      [Symbol(kHandle)]: [TLSWrap],
      [Symbol(lastWriteQueueSize)]: 0,
      [Symbol(timeout)]: null,
      [Symbol(kBuffer)]: null,
      [Symbol(kBufferCb)]: null,
      [Symbol(kBufferGen)]: null,
      [Symbol(kCapture)]: false,
      [Symbol(kSetNoDelay)]: false,
      [Symbol(kSetKeepAlive)]: true,
      [Symbol(kSetKeepAliveInitialDelay)]: 60,
      [Symbol(kBytesRead)]: 0,
      [Symbol(kBytesWritten)]: 0,
      [Symbol(connect-options)]: [Object]
    },
    _header: 'GET /Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin HTTP/1.1\r\n' +
      'Accept: application/json, text/plain, */*\r\n' +
      'User-Agent: axios/1.3.4\r\n' +
      'Accept-Encoding: gzip, compress, deflate, br\r\n' +
      'Host: huggingface.co\r\n' +
      'Connection: close\r\n' +
      '\r\n',
    _keepAliveTimeout: 0,
    _onPendingData: [Function: nop],
    agent: Agent {
      _events: [Object: null prototype],
      _eventsCount: 2,
      _maxListeners: undefined,
      defaultPort: 443,
      protocol: 'https:',
      options: [Object: null prototype],
      requests: [Object: null prototype] {},
      sockets: [Object: null prototype],
      freeSockets: [Object: null prototype] {},
      keepAliveMsecs: 1000,
      keepAlive: false,
      maxSockets: Infinity,
      maxFreeSockets: 256,
      scheduling: 'lifo',
      maxTotalSockets: Infinity,
      totalSocketCount: 1,
      maxCachedSessions: 100,
      _sessionCache: [Object],
      [Symbol(kCapture)]: false
    },
    socketPath: undefined,
    method: 'GET',
    maxHeaderSize: undefined,
    insecureHTTPParser: undefined,
    path: '/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin',
    _ended: true,
    res: IncomingMessage {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 4,
      _maxListeners: undefined,
      socket: [TLSSocket],
      httpVersionMajor: 1,
      httpVersionMinor: 1,
      httpVersion: '1.1',
      complete: true,
      rawHeaders: [Array],
      rawTrailers: [],
      aborted: false,
      upgrade: false,
      url: '',
      method: null,
      statusCode: 404,
      statusMessage: 'Not Found',
      client: [TLSSocket],
      _consuming: false,
      _dumped: false,
      req: [Circular *1],
      responseUrl: 'https://huggingface.co/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin',
      redirects: [],
      [Symbol(kCapture)]: false,
      [Symbol(kHeaders)]: [Object],
      [Symbol(kHeadersCount)]: 30,
      [Symbol(kTrailers)]: null,
      [Symbol(kTrailersCount)]: 0
    },
    aborted: false,
    timeoutCb: null,
    upgradeOrConnect: false,
    parser: null,
    maxHeadersCount: null,
    reusedSocket: false,
    host: 'huggingface.co',
    protocol: 'https:',
    _redirectable: Writable {
      _writableState: [WritableState],
      _events: [Object: null prototype],
      _eventsCount: 3,
      _maxListeners: undefined,
      _options: [Object],
      _ended: true,
      _ending: true,
      _redirectCount: 1,
      _redirects: [],
      _requestBodyLength: 0,
      _requestBodyBuffers: [],
      _onNativeResponse: [Function (anonymous)],
      _currentRequest: [Circular *1],
      _currentUrl: 'https://huggingface.co/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin',
      _isRedirect: true,
      [Symbol(kCapture)]: false
    },
    [Symbol(kCapture)]: false,
    [Symbol(kNeedDrain)]: false,
    [Symbol(corked)]: 0,
    [Symbol(kOutHeaders)]: [Object: null prototype] {
      accept: [Array],
      'user-agent': [Array],
      'accept-encoding': [Array],
      host: [Array]
    },
    [Symbol(kUniqueHeaders)]: null
  },
  response: {
    status: 404,
    statusText: 'Not Found',
    headers: AxiosHeaders {
      date: 'Sun, 02 Apr 2023 14:17:32 GMT',
      'content-type': 'text/plain; charset=utf-8',
      'content-length': '15',
      connection: 'close',
      server: 'nginx',
      'x-powered-by': 'huggingface-moon',
      'x-request-id': 'Root=1-64298e7c-43ed4f5b1545d4327d062971',
      'access-control-allow-origin': 'https://huggingface.co',
      vary: 'Origin',
      'access-control-expose-headers': 'X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range',
      'x-repo-commit': '6535ce6324cf35d2f9b1a49ce74f0a78201a994a',
      'x-error-code': 'EntryNotFound',
      'x-error-message': 'Entry not found',
      etag: 'W/"f-mY2VvLxuxB7KhsoOdQTlMTccuAQ"',
      'strict-transport-security': 'max-age=31536000; includeSubDomains'
    },
    config: {
      transitional: [Object],
      adapter: [Array],
      transformRequest: [Array],
      transformResponse: [Array],
      timeout: 0,
      xsrfCookieName: 'XSRF-TOKEN',
      xsrfHeaderName: 'X-XSRF-TOKEN',
      maxContentLength: Infinity,
      maxBodyLength: -1,
      env: [Object],
      validateStatus: [Function: validateStatus],
      headers: [AxiosHeaders],
      url: 'https://huggingface.co/Pi3141/alpaca-13B-ggml/resolve/main/ggml-model-q4_0.bin',
      method: 'get',
      responseType: 'stream',
      onDownloadProgress: [Function: onDownloadProgress],
      data: undefined
    },
    request: <ref *1> ClientRequest {
      _events: [Object: null prototype],
      _eventsCount: 7,
      _maxListeners: undefined,
      outputData: [],
      outputSize: 0,
      writable: true,
      destroyed: false,
      _last: true,
      chunkedEncoding: false,
      shouldKeepAlive: false,
      maxRequestsOnConnectionReached: false,
      _defaultKeepAlive: true,
      useChunkedEncodingByDefault: false,
      sendDate: false,
      _removedConnection: false,
      _removedContLen: false,
      _removedTE: false,
      _contentLength: 0,
      _hasBody: true,
      _trailer: '',
      finished: true,
      _headerSent: true,
      _closed: false,
      socket: [TLSSocket],
      _header: 'GET /Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin HTTP/1.1\r\n' +
        'Accept: application/json, text/plain, */*\r\n' +
        'User-Agent: axios/1.3.4\r\n' +
        'Accept-Encoding: gzip, compress, deflate, br\r\n' +
        'Host: huggingface.co\r\n' +
        'Connection: close\r\n' +
        '\r\n',
      _keepAliveTimeout: 0,
      _onPendingData: [Function: nop],
      agent: [Agent],
      socketPath: undefined,
      method: 'GET',
      maxHeaderSize: undefined,
      insecureHTTPParser: undefined,
      path: '/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin',
      _ended: true,
      res: [IncomingMessage],
      aborted: false,
      timeoutCb: null,
      upgradeOrConnect: false,
      parser: null,
      maxHeadersCount: null,
      reusedSocket: false,
      host: 'huggingface.co',
      protocol: 'https:',
      _redirectable: [Writable],
      [Symbol(kCapture)]: false,
      [Symbol(kNeedDrain)]: false,
      [Symbol(corked)]: 0,
      [Symbol(kOutHeaders)]: [Object: null prototype],
      [Symbol(kUniqueHeaders)]: null
    },
    data: AxiosTransformStream {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 7,
      _maxListeners: undefined,
      _writableState: [WritableState],
      allowHalfOpen: true,
      [Symbol(kCapture)]: false,
      [Symbol(kCallback)]: null,
      [Symbol(internals)]: [Object]
    }
  }
}
@jke-cs

jke-cs commented Apr 2, 2023

Same issue.

@nothingface0

It's trying to fetch the 13B params from the following link: https://huggingface.co/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_0.bin

while the correct one is https://huggingface.co/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_1.bin (note the 1 in the filename).

I'm currently downloading the file manually from Hugging Face, in order to put it in <your dalai path here>/alpaca/models/13B/ and see if it works.
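
If you want to try the same manual route, something like this should work (curl -L follows Hugging Face's redirect; the ~/dalai path is an assumption, substitute your own dalai path):

curl -L -o ~/dalai/alpaca/models/13B/ggml-model-q4_1.bin https://huggingface.co/Pi3141/alpaca-native-13B-ggml/resolve/main/ggml-model-q4_1.bin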

@smobanan

smobanan commented Apr 2, 2023

Encountering this too. It's because it's trying to download the model from
https://huggingface.co/Pi3141/alpaca-13B-ggml/resolve/main/ggml-model-q4_0.bin
but the new link is
https://huggingface.co/Pi3141/alpaca-13B-ggml/resolve/main/ggml-model-q4_1.bin

I'm trying to figure out where to change the link in the code, but I have no idea what I'm doing lol

@nothingface0

nothingface0 commented Apr 2, 2023

I'm trying to figure out where to change the link in the code, but I have no idea what I'm doing lol

Should be here.

Your local file will be under ~/.npm/_npx/<something>/node_modules/dalai/alpaca.js.

If you've already downloaded the correct .bin file, you just need to move it to <your dalai path here>/alpaca/models/13B/, rename it to ggml-model-q4_0.bin, and restart the serve command.

Update: I didn't look much into it, but downloading the correct bin file manually and placing it in the models folder alone will NOT do the trick. However, if you alter the ~/.npm/_npx/<something>/node_modules/dalai/alpaca.js file to point to the correct URL for 13B, it should work until an update is pushed.
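
For reference, a sketch of what the edited 13B case in alpaca.js might look like (the url/this.root.down structure is copied from the snippet quoted later in this thread; pointing it at the q4_1 file is the only change):

case "13B":
            // the q4_0 file 404s on Hugging Face; fetch the q4_1 file instead
            url = "https://huggingface.co/Pi3141/alpaca-13B-ggml/resolve/main/ggml-model-q4_1.bin"
            // keep the local filename dalai expects
            await this.root.down(url, path.resolve(dir, "ggml-model-q4_0.bin"))
            break;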

@madnight

madnight commented Apr 2, 2023

Same issue over here. Any timeline for a fixed release?

@MichaelCharles
Author

MichaelCharles commented Apr 3, 2023

I've created a pull request that should solve this issue.

In the meantime, if you want to use my fork with the updated URL, you can run npm install --global git://github.com/mcaubrey/dalai.git. From there, if you run dalai alpaca install 13B, it should install the correct model.

If you already have dalai installed globally, you may need to npm remove --global dalai first.
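
Putting that together (the remove step is only needed if dalai is already installed globally):

npm remove --global dalai
npm install --global git://github.com/mcaubrey/dalai.git
dalai alpaca install 13B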

Remember to switch back to the main repository after this is fixed. I don't plan on keeping my fork updated or anything like that.

@Resmond-s

This comment was marked as outdated.

@MichaelCharles
Author

I just wanted to say that although my fix downloads the correct bin file, and the app properly attempts to use that file, it still results in an error. I've discussed this over on the pull request, but I'll put it here too.

I've checked that the SHA256 matches, so the whole file was downloaded correctly. When I run the web app in debug mode, the output is like this:

/Users/michaelaubrey/dalai/alpaca/main --seed -1 --threads 8 --n_predict 200 --model /Users/michaelaubrey/dalai/alpaca/models/13B/ggml-model-q4_1.bin --top_k 40 --top_p 0.1 --temp 0.1 --repeat_last_n 64 --repeat_penalty 1.3 -p "Once upon a time there lived a girl named Darla, and she
"
exit

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ /Users/michaelaubrey/dalai/alpaca/main --seed -1 --threads 8 --n_predict 200 --model /Users/michaelaubrey/dalai/alpaca/models/13B/ggml-model-q4_1.bin --top_k 40 --top_p 0.1 --temp 0.1 --repeat_last_n 64 --repeat_penalty 1.3 -p "Once upon a time there lived a girl named Darla, and she
> "
main: seed = 1680573720
llama_model_load: loading model from '/Users/michaelaubrey/dalai/alpaca/models/13B/ggml-model-q4_1.bin' - please wait ...
llama_model_load: invalid model file '/Users/michaelaubrey/dalai/alpaca/models/13B/ggml-model-q4_1.bin' (bad magic)
main: failed to load model from '/Users/michaelaubrey/dalai/alpaca/models/13B/ggml-model-q4_1.bin'
bash-3.2$ exit
exit
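
For anyone wanting to run the same integrity check: on macOS, something like the following prints the file's SHA256, which you can compare against the hash shown on the model's Hugging Face page (the path here is an assumption, use wherever your file landed):

shasum -a 256 ~/dalai/alpaca/models/13B/ggml-model-q4_1.bin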

@billsecond

I tried to download the 13B model by changing the filename to _1 in the alpaca.js file. It downloaded the correct file; I then renamed the bin file to _0, and it finally showed up in the dalai GUI, but it didn't respond to my prompts. What am I missing?

@nothingface0

nothingface0 commented Apr 5, 2023

I tried to download the 13B model by changing the filename to _1 in the alpaca.js file. It downloaded the correct file; I then renamed the bin file to _0, and it finally showed up in the dalai GUI, but it didn't respond to my prompts. What am I missing?

@billsecond As mentioned by @mcaubrey, the file is probably in a different format than the one supported by llama.cpp, hence the llama_model_load: invalid model file '/Users/michaelaubrey/dalai/alpaca/models/13B/ggml-model-q4_1.bin' (bad magic) error.

From my understanding, we have to wait for an update to support this specific file type. I may be mistaken though.

Edit: Could it be related to this?
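
To see what "bad magic" is complaining about: the loader reads a 4-byte magic number from the start of the .bin and rejects anything it doesn't recognize. If I understand the formats right, the original format's magic is the little-endian uint32 0x67676d6c (the characters "ggml"), and the newer q4_1 files apparently carry a different magic, which old builds report as "bad magic". A minimal Node sketch (a hypothetical helper, not part of dalai) to inspect it:

// check-magic.js
const fs = require("fs");

const buf = Buffer.alloc(4);
const fd = fs.openSync(process.argv[2], "r");
fs.readSync(fd, buf, 0, 4, 0); // the magic is the first 4 bytes of the file
fs.closeSync(fd);
// llama.cpp-family loaders store the magic as a little-endian uint32;
// the original format uses 0x67676d6c ("ggml")
console.log("magic: 0x" + buf.readUInt32LE(0).toString(16));

Run it as node check-magic.js ~/dalai/alpaca/models/13B/ggml-model-q4_1.bin.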

@MichaelCharles
Author

It might be possible to download and then convert the file ourselves; I haven't played with it at all yet, but then we'd need a place to host it.

It's probably better to wait until someone already hosting these models figures this out and provides a way to retrieve them.

@jcpsimmons

Is this still open? When I try @mcaubrey's branch I get

npm ERR! git dep preparation failed

Am I understanding correctly that it's simply a misnamed string in the library? Is there a PR open?

@BMoradi1

Still broken.

@mcaubrey I get an error with your repo.

@Minebot17

Minebot17 commented Apr 14, 2023

I successfully launched Alpaca 13B on Windows 10, manually for now (consolidated commands after the list):

  1. Download the original pth model and the params file and put them in the 13B folder:
    https://huggingface.co/Pi3141/alpaca-native-13B-ggml/blob/main/consolidated.00.pth
    https://huggingface.co/Pi3141/alpaca-native-13B-ggml/blob/main/params.json
  2. Open a console, cd to your dalai home, and run the 13B model install once if you haven't already, so the executables get built.
  3. Run the pth-to-ggml converter from dalai's venv: ./venv/Scripts/python ./alpaca/convert-pth-to-ggml.py ./alpaca/models/13B 1 0
  4. Run quantize.exe with the old quantize format: ./alpaca/build/Release/quantize.exe E:/dalai/alpaca/models/13B/ggml-model-f16.bin E:/dalai/alpaca/models/13B/ggml-model-q4_0.bin 2
  5. Type npx dalai serve and check that the model works fine.
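
In one go (assuming the dalai home is E:/dalai as in the paths above, and the files from step 1 are already in ./alpaca/models/13B):

cd E:/dalai
./venv/Scripts/python ./alpaca/convert-pth-to-ggml.py ./alpaca/models/13B 1 0
./alpaca/build/Release/quantize.exe E:/dalai/alpaca/models/13B/ggml-model-f16.bin E:/dalai/alpaca/models/13B/ggml-model-q4_0.bin 2
npx dalai serve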

@hchenphd

I'm trying to figure out where to change the link in the code, but I have no idea what I'm doing lol

Should be here.

Your local file will be under ~/.npm/_npx/<something>/node_modules/dalai/alpaca.js.

If you've already downloaded the correct .bin file, you just need to move it to <your dalai path here>/alpaca/models/13B/, rename it to ggml-model-q4_0.bin, and restart the serve command.

Update: I didn't look much into it, but downloading the correct bin file manually and placing it in the models folder alone will NOT do the trick. However, if you alter the ~/.npm/_npx/<something>/node_modules/dalai/alpaca.js file to point to the correct URL for 13B, it should work until an update is pushed.

Changing the alpaca.js file works.

@dejanr92

dejanr92 commented Apr 15, 2023

Hey guys, I fixed this issue by using the torrent download instead of the file link.

Steps to fix:

Edit the local file ~/.npm/_npx/<randomhash>/node_modules/dalai/alpaca.js (use your latest cached version for <randomhash>).

Change the section at line 91 from

case "13B":
            /*
            await this.root.torrent.add('magnet:?xt=urn:btih:053b3d54d2e77ff020ebddf51dad681f2a651071&dn=ggml-alpaca-13b-q4.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce', dir)
            console.log("renaming")
            await fs.promises.rename(
              path.resolve(dir, "ggml-alpaca-13b-q4.bin"),
              path.resolve(dir, "ggml-model-q4_0.bin")
            )
            */
            url = "https://huggingface.co/Pi3141/alpaca-13B-ggml/resolve/main/ggml-model-q4_0.bin"
            await this.root.down(url, path.resolve(dir, "ggml-model-q4_0.bin"))
            break;

to

case "13B":
            await this.root.torrent.add('magnet:?xt=urn:btih:053b3d54d2e77ff020ebddf51dad681f2a651071&dn=ggml-alpaca-13b-q4.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce', dir)
            console.log("renaming")
            await fs.promises.rename(
              path.resolve(dir, "ggml-alpaca-13b-q4.bin"),
              path.resolve(dir, "ggml-model-q4_0.bin")
            )
            break;

Then rerun npx dalai alpaca install 13B

The torrent download works great; the only catch is if you don't want to use peer-to-peer file transfers for some reason.

@bernard-hossmoto

Hey guys I fixed this issue by using the torrent download instead of the file link

Thanks, that fixed it for me.

@ndroftheline

Broken in Docker. I've attempted the torrent fix; I'll try to update once it's finished redownloading and converting.

docker exec -ti dalai-dalai-1 /bin/bash
vi /root/dalai/node_modules/dalai/alpaca.js
(edit as described by dejanr92, then save)
npx dalai alpaca install 13B
