
Is it possible to run Open-Canvas on Linux fully-local using a local LangGraph CLI instead of LangGraph Studio cloud? #181

Open
PieBru opened this issue Nov 4, 2024 · 12 comments



PieBru commented Nov 4, 2024

It's all in the title.
Thank you.


lanesky commented Nov 6, 2024

I shared an approach for running Open-Canvas on Windows. This method might also work on Linux, so feel free to check it out—it could be useful! However, it still requires a LangSmith API key to push trace information to the cloud, unfortunately.

#114 (comment)


PieBru commented Nov 6, 2024

@lanesky Thank you.
I just tried the staging branch, since it seems to be the preferred base for an eventual PR.
I'm new to tools like Uvicorn and friends, so any help that saves me from going down the rabbit hole would be appreciated.

This is the log tail; it's on an Arch Linux LXC container on Proxmox VE 8.2.

2024-11-06T18:29:30.308511Z [info     ] Started server process [1]     [uvicorn.error] api_revision=f2b62eb api_variant=local color_message=Started server process [%d]
2024-11-06T18:29:30.308637Z [info     ] Waiting for application startup. [uvicorn.error] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:30.308847Z [warning  ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:30.753507Z [info     ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:30.851005Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:31.931555Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:34.091808Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:38.409794Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:47.041806Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:30:00.804040Z [error    ] Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
    async with self.lifespan_context(app) as maybe_state:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/api/langgraph_api/lifespan.py", line 29, in lifespan
  File "/usr/local/lib/python3.12/site-packages/langgraph_storage/database.py", line 113, in start_pool
    await _pg_pool.open(wait=True)
  File "/usr/local/lib/python3.12/site-packages/psycopg_pool/pool_async.py", line 383, in open
    await self.wait(timeout=timeout)
  File "/usr/local/lib/python3.12/site-packages/psycopg_pool/pool_async.py", line 170, in wait
    raise PoolTimeout(f"pool initialization incomplete after {timeout} sec")
psycopg_pool.PoolTimeout: pool initialization incomplete after 30.0 sec
 [uvicorn.error] api_revision=f2b62eb api_variant=local
2024-11-06T18:30:00.804185Z [error    ] Application startup failed. Exiting. [uvicorn.error] api_revision=f2b62eb api_variant=local

These are the installation commands, starting from a clean ArchLinux-base template with the admin user added and ssh enabled.

# Install prerequisites
sudo pacman -S git wget less python python-pip docker
sudo systemctl enable --now docker
sudo systemctl status docker

# Clone repo
cd && mkdir Github && cd Github
git clone https://github.com/langchain-ai/open-canvas/

# Update repo
cd && cd Github/open-canvas
git branch
git switch staging
git branch
git pull

# Install LangGraph-CLI
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -U langgraph-cli
langgraph dockerfile -c langgraph.json Dockerfile
docker build -t open-canvas .

# Install & run Redis: https://hub.docker.com/_/redis/
sudo docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning

# Install & run Postgres: https://hub.docker.com/_/postgres
sudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

# Install & run Open-Canvas
touch .env
# Production: export LANGGRAPH_CLOUD_LICENSE_KEY="***************"
docker run \
    --env-file .env \
    -p 57318:8000 \
    -e REDIS_URI="redis://host.docker.internal:6379/0" \
    -e DATABASE_URI="postgresql://myuser:mysecretpassword@host.docker.internal:5432/mydatabase" \
    -e LANGSMITH_API_KEY="lsv2_pt_*******************" \
    open-canvas

To replicate exactly, this is my test LXC housekeeping before the above installation:

# On the PVE server, "Create CT" using archlinux-base template
# RAM=4GB+, CPU=1+, DISK=16GB+

# LXC  housekeeping, login as root:
pacman-key --init && pacman-key --populate
pacman -Sy archlinux-keyring --noconfirm && pacman -Su --noconfirm
sync && reboot

# Add admin account
pacman -S sudo --noconfirm		# In Debian: apt install sudo
useradd --create-home admin
passwd admin
usermod -aG wheel admin			# In Debian: use sudo instead of wheel

visudo
  # Uncomment to allow members of group wheel to execute any command
  %wheel ALL=(ALL:ALL) ALL		# In Debian: it's %sudo instead of %wheel

# Enable sshd
nano /etc/ssh/sshd_config
	PasswordAuthentication yes
systemctl enable --now sshd

# Get the IP address
ip a | grep 'inet ' 
echo $HOSTNAME

# Access via ssh from another computer
$ ssh admin@<hostname_or_ip>

sudo nano /etc/pacman.conf
  # Misc options
  #UseSyslog
  Color
  ILoveCandy
  #NoProgressBar
  #CheckSpace
  VerbosePkgLists
  ParallelDownloads = 5
  
sudo poweroff
# Backup now the fresh installation,
# then start the LXC.


lanesky commented Nov 6, 2024

@PieBru You're welcome!

Based on the error you posted, it looks like this is likely caused by a name resolution issue — meaning the connection to PostgreSQL failed because the hostname couldn’t be resolved.

2024-11-06T18:29:30.851005Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:31.931555Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:34.091808Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:38.409794Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:29:47.041806Z [warning  ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=f2b62eb api_variant=local
2024-11-06T18:30:00.804040Z [error    ] Traceback (most recent call last):
...
...
  File "/usr/local/lib/python3.12/site-packages/langgraph_storage/database.py", line 113, in start_pool
    await _pg_pool.open(wait=True)
...
...
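As a quick side check (a sketch of mine, not something from your logs): psycopg's `[Errno -2] Name or service not known` is a failed `getaddrinfo` lookup under the hood, which you can reproduce directly in Python:

```python
import socket

def can_resolve(host: str, port: int = 5432) -> bool:
    """Return True if `host` resolves to an address. psycopg's
    '[Errno -2] Name or service not known' is a getaddrinfo failure."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))             # a name that resolves
print(can_resolve("no-such-host.invalid"))  # .invalid is reserved and never resolves
```

Running this inside the API container with the hostname from your DATABASE_URI tells you immediately whether the failure is DNS or something else.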

Looking at the install commands you posted:

# Install & run Redis: https://hub.docker.com/_/redis/
sudo docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning

# Install & run Postgres: https://hub.docker.com/_/postgres
sudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

I think the hostnames for Redis and PostgreSQL in your environment variables should be the container names, some-redis and some-postgres respectively. (On plain Linux, host.docker.internal does not resolve by default; Docker Desktop provides it, the stock Linux daemon does not, which would explain the name-resolution errors. Also note that container names only resolve between containers attached to the same user-defined Docker network, so you may need to create one and attach all three containers to it.)

docker run \
    --env-file .env \
    -p 57318:8000 \
    -e REDIS_URI="redis://some-redis:6379/0" \
    -e DATABASE_URI="postgresql://postgres:mysecretpassword@some-postgres:5432/postgres" \
    -e LANGSMITH_API_KEY="lsv2_pt_*******************" \
    open-canvas


dashan1127 commented Nov 12, 2024

Could you help? I used your method to start the containers on Linux, and an error occurred when starting the open-canvas image. I'm using LangSmith's personal API key.

root@pekpheds01309:/deps/open-canvas# docker logs 60e67404cdd4
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
2024-11-12T00:53:24.371303Z [info ] Started server process [1] [uvicorn.error] api_revision=2f1052b api_variant=local color_message=Started server process [%d]
2024-11-12T00:53:24.371498Z [info ] Waiting for application startup. [uvicorn.error] api_revision=2f1052b api_variant=local
2024-11-12T00:53:24.371831Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=2f1052b api_variant=local
2024-11-12T00:53:25.553440Z [info ] HTTP Request: GET https://apigw-beta.test.xxx.com/auth?langgraph-api=true "HTTP/1.1 200 " [httpx] api_revision=2f1052b api_variant=local
2024-11-12T00:53:25.611809Z [error ] Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
    async with self.lifespan_context(app) as maybe_state:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/api/langgraph_api/lifespan.py", line 29, in lifespan
  File "/usr/local/lib/python3.12/site-packages/langgraph_storage/database.py", line 115, in start_pool
    await migrate()
  File "/usr/local/lib/python3.12/site-packages/langgraph_storage/database.py", line 97, in migrate
    if version <= current_version:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<=' not supported between instances of 'int' and 'str'
[uvicorn.error] api_revision=2f1052b api_variant=local
2024-11-12T00:53:25.612057Z [error ] Application startup failed. Exiting. [uvicorn.error] api_revision=2f1052b api_variant=local


lanesky commented Nov 12, 2024

@dashan1127

It seems that the API revision you’re running is different from mine. The API revision that worked fine in my environment was 4da016a, while yours is 2f1052b. I suspect the issue may be related to the base image, specifically langchain/langgraphjs-api:20. I also noticed that this image on Docker Hub has been updating frequently, so you might want to try using langchain/langgraphjs-api:20-4da016a, which matches the API revision that has been working well in my environment.

Here is the log from my environment:

2024-11-12T05:46:49.017145Z [info     ] Started server process [1]     [uvicorn.error] api_revision=4da016a api_variant=local color_message=Started server process [%d]
2024-11-12T05:46:49.017285Z [info     ] Waiting for application startup. [uvicorn.error] api_revision=4da016a api_variant=local
2024-11-12T05:46:49.017490Z [warning  ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=4da016a api_variant=local
2024-11-12T05:46:49.336546Z [info     ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=4da016a api_variant=local
2024-11-12T05:46:49.390290Z [info     ] Postgres pool stats            [langgraph_storage.database] api_revision=4da016a api_variant=local connections_ms=30 connections_num=1 pool_available=1 pool_max=150 pool_min=1 pool_size=1 requests_num=1 requests_waiting=0 usage_ms=2
2024-11-12T05:46:49.446827Z [info     ] Started server process [1]     [uvicorn.error] api_revision=4da016a api_variant=local color_message=Started server process [%d]
2024-11-12T05:46:49.446973Z [info     ] Redis pool stats               [langgraph_storage.redis] api_revision=4da016a api_variant=local idle_connections=1 in_use_connections=0 max_connections=500
2024-11-12T05:46:49.447773Z [info     ] Waiting for application startup. [uvicorn.error] api_revision=4da016a api_variant=local

@dashan1127

I have changed it to langchain/langgraphjs-api:20-4da016a and rebuilt the image, but the same error still occurs.

podman start -a open-canvas_langgraph-redis_1
[langgraph-redis] | 1:C 12 Nov 2024 09:10:38.038 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
[langgraph-redis] | 1:C 12 Nov 2024 09:10:38.038 # Redis version=6.2.16, bits=64, commit=00000000, modified=0, pid=1, just started
[langgraph-redis] | 1:C 12 Nov 2024 09:10:38.038 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
[langgraph-redis] | 1:M 12 Nov 2024 09:10:38.038 * monotonic clock: POSIX clock_gettime
[langgraph-redis] | 1:M 12 Nov 2024 09:10:38.040 * Running mode=standalone, port=6379.
[langgraph-redis] | 1:M 12 Nov 2024 09:10:38.040 # Server initialized
[langgraph-redis] | 1:M 12 Nov 2024 09:10:38.040 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see jemalloc/jemalloc#1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[langgraph-redis] | 1:M 12 Nov 2024 09:10:38.040 * Ready to accept connections
podman start -a open-canvas_langgraph-postgres_1
[langgraph-postgres] |
[langgraph-postgres] | PostgreSQL Database directory appears to contain a database; Skipping initialization
[langgraph-postgres] |
2024-11-12 09:10:39.514 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2024-11-12 09:10:39.515 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-11-12 09:10:39.515 UTC [1] LOG: listening on IPv6 address "::", port 5432
2024-11-12 09:10:39.518 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-11-12 09:10:39.524 UTC [27] LOG: database system was shut down at 2024-11-12 06:42:36 UTC
2024-11-12 09:10:39.577 UTC [1] LOG: database system is ready to accept connections
podman start -a open-canvas_langgraph-api_1
[langgraph-api] | 2024-11-12T09:10:42.686398Z [info ] Started server process [1] [uvicorn.error] api_revision=4da016a api_variant=local color_message=Started server process [%d]
[langgraph-api] | 2024-11-12T09:10:42.686620Z [info ] Waiting for application startup. [uvicorn.error] api_revision=4da016a api_variant=local
[langgraph-api] | 2024-11-12T09:10:42.687035Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=4da016a api_variant=local
[langgraph-api] | 2024-11-12T09:10:44.766037Z [info ] HTTP Request: GET https://apigw-beta.test.xxx.com/api/langsmith/auth?langgraph-api=true "HTTP/1.1 200 " [httpx] api_revision=4da016a api_variant=local
[langgraph-api] | 2024-11-12T09:10:44.881219Z [error ] Traceback (most recent call last):
[langgraph-api] | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
[langgraph-api] | async with self.lifespan_context(app) as maybe_state:
[langgraph-api] | ^^^^^^^^^^^^^^^^^^^^^^^^^^
[langgraph-api] | File "/usr/local/lib/python3.12/contextlib.py", line 210, in __aenter__
[langgraph-api] | return await anext(self.gen)
[langgraph-api] | ^^^^^^^^^^^^^^^^^^^^^
[langgraph-api] | File "/api/langgraph_api/lifespan.py", line 29, in lifespan
[langgraph-api] | File "/usr/local/lib/python3.12/site-packages/langgraph_storage/database.py", line 115, in start_pool
[langgraph-api] | await migrate()
[langgraph-api] | File "/usr/local/lib/python3.12/site-packages/langgraph_storage/database.py", line 97, in migrate
[langgraph-api] | if version <= current_version:
[langgraph-api] | ^^^^^^^^^^^^^^^^^^^^^^^^^^
[langgraph-api] | TypeError: '<=' not supported between instances of 'int' and 'str'
[langgraph-api] | [uvicorn.error] api_revision=4da016a api_variant=local
[langgraph-api] | 2024-11-12T09:10:44.881441Z [error ] Application startup failed. Exiting. [uvicorn.error] api_revision=4da016a api_variant=local
exit code: 3


lanesky commented Nov 12, 2024

Sorry to hear that. What is "https://apigw-beta.test.xxx.com/api/langsmith/auth?langgraph-api=true" in your log? Is it an API stub?


dashan1127 commented Nov 13, 2024

The container is deployed on the company's intranet, and accessing LangChain requires a proxy. The auth endpoint at that address is reachable. What a strange problem. Is there any other way?

(screenshot: auth result)


lanesky commented Nov 13, 2024

I have reviewed the related source code (you can also access it by dumping storage/langgraph_storage/database.py from the container). The error seems to occur because the SQL statement SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 1 returned a string, whereas an integer was expected. Please check the schema_migrations table in your PostgreSQL database to make sure it has the correct schema, specifically that the version column is of type bigint. If the schema is correct, the issue may lie between the psycopg PostgreSQL driver and your database. You can also inspect the rows in the schema_migrations table for a clue as to why a string was returned by that statement.

async def migrate() -> None:
    async with connect() as conn, conn.cursor() as cur:
        try:
            results = await cur.execute(
                "select version from schema_migrations order by version desc limit 1",
                prepare=False,
            )
            current_version = (await results.fetchone())["version"]
        except UndefinedTable:
            await cur.execute(
                """
                CREATE TABLE schema_migrations (
                    version bigint primary key,
                    dirty boolean not null
                )
                """,
                prepare=False,
            )
            current_version = -1
        for migration_path in sorted(os.listdir(config.MIGRATIONS_PATH)):
            version = int(migration_path.split("_")[0])
            if version <= current_version:
                continue
            with open(os.path.join(config.MIGRATIONS_PATH, migration_path)) as f:
                await cur.execute(f.read(), prepare=False)
            await cur.execute(
                "INSERT INTO schema_migrations (version, dirty) VALUES (%s, %s)",
                (version, False),
            )
            await logger.ainfo("Applied database migration", version=version)
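The failure mode boils down to Python refusing to order an int against a str. A minimal reproduction (my own sketch; the "20231115" value is hypothetical, standing in for whatever your schema_migrations table returned as text):

```python
# version comes from a migration filename via int(); current_version is
# whatever the SELECT returned -- here a str, mimicking the broken case.
version = 1
current_version = "20231115"  # hypothetical value read back as text, not bigint

try:
    version <= current_version
except TypeError as exc:
    print(exc)  # '<=' not supported between instances of 'int' and 'str'

# Defensive sketch: coerce the stored value before comparing.
current_version = int(current_version)
print(version <= current_version)  # True
```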


dashan1127 commented Nov 17, 2024

Hello, I have another new question. The langchain/langgraphjs-api service starts successfully (port mapping 8123:8000) and the /ok endpoint responds normally, but when I call it from the front end I get 404s. Is there something wrong with my Open-Canvas configuration?
(screenshots: 404 responses)

So, what host should be configured for these endpoints: /api/threads/search, /api/threads, /api/assistants, and the other /api/* routes?

The langchain/langgraphjs-api service logs the requests, but they return 404:

2024-11-17T13:47:10.137338Z [info ] POST /api/threads/search 404 0ms [langgraph_api.server] api_revision=4da016a api_variant=local latency_ms=0 method=POST path=/api/threads/search path_params={} proto=1.1 query_string= req_header={'host': '10.68.2xx:8123', 'connection': 'keep-alive', 'content-length': '68', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36', 'content-type': 'application/json', 'accept': '/', 'origin': 'http://localhost:3000', 'referer': 'http://localhost:3000/', 'accept-encoding': 'gzip, deflate', 'accept-language': 'zh-CN,zh;q=0.9'} res_header={'content-length': '9', 'content-type': 'text/plain; charset=utf-8'} route=None status=404
2024-11-17T13:47:10.146528Z [info ] POST /api/threads 404 0ms [langgraph_api.server] api_revision=4da016a api_variant=local latency_ms=0 method=POST path=/api/threads path_params={} proto=1.1 query_string= req_header={'host': '10.68.23xx:8123', 'connection': 'keep-alive', 'content-length': '77', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36', 'content-type': 'application/json', 'accept': '/', 'origin': 'http://localhost:3000', 'referer': 'http://localhost:3000/', 'accept-encoding': 'gzip, deflate', 'accept-language': 'zh-CN,zh;q=0.9'} res_header={'content-length': '9', 'content-type': 'text/plain; charset=utf-8'} route=None status=404


lanesky commented Nov 18, 2024

@dashan1127
Does the port match the LANGGRAPH_API_URL in constants.ts? You can also check the Response tab in Chrome DevTools for any additional information.
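One more hedged observation (an assumption based on the log lines, not something confirmed in this thread): route=None in those 404 entries means no registered route matched. The LangGraph server itself serves paths like /threads/search without an /api prefix; the /api/* paths are the Next.js frontend's own proxy routes, so they should be sent to the frontend rather than to port 8123 directly. A toy sketch of the mapping:

```python
# Hypothetical helper, illustration only: strip the frontend's /api
# prefix to get the path the LangGraph server itself would serve.
def langgraph_path(frontend_path: str) -> str:
    prefix = "/api/"
    if frontend_path.startswith(prefix):
        return "/" + frontend_path[len(prefix):]
    return frontend_path

print(langgraph_path("/api/threads/search"))  # /threads/search
print(langgraph_path("/api/assistants"))      # /assistants
```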


dashan1127 commented Nov 28, 2024

I hadn't worked on this recently and picked it up again today. The front-end service starts, but during Q&A the model call errors out and the log shows MODEL_NOT_FOUND. Can you help me figure out what's going on?

2024-11-28T13:28:41.233678Z [info ] HTTP Request: POST http://graph/agent/streamEvents "HTTP/1.1 200 OK" [httpx] api_revision=4da016a api_variant=local
config.configurable!! {
run_id: '1efad8ca-e6ac-6757-b402-a013bbb03ac5',
user_id: '',
graph_id: 'agent',
thread_id: '7eb06ea8-c1ca-4a0a-b026-b7372cc2bf0c',
assistant_id: 'b5b05dbe-5236-41f4-88c3-f5547380e62d',
customModelName: 'gpt-4o',
__pregel_resuming: false,
__pregel_task_id: '989fad13-ef49-50a2-9470-01f11cffdee2',
__pregel_send: [Function: __pregel_send],
__pregel_read: [Function: __pregel_read],
__pregel_checkpointer: RemoteCheckpointer { serde: JsonPlusSerializer {} },
checkpoint_map: { '': '1efad8ca-ee0f-65e0-8000-dff544dbd5bf' },
checkpoint_id: undefined,
checkpoint_ns: 'generatePath:989fad13-ef49-50a2-9470-01f11cffdee2'
}
{"timestamp":"2024-11-28T13:28:41.878Z","level":"error","event":"Error: 404 status code (no body)\n\nTroubleshooting URL: https://js.langchain.com/docs/troubleshooting/errors/MODEL_NOT_FOUND/\n\n at Function.generate (/deps/open-canvas/node_modules/openai/src/error.ts:82:14)\n at OpenAI.makeStatusError (/deps/open-canvas/node_modules/openai/src/core.ts:435:21)\n at OpenAI.makeRequest (/deps/open-canvas/node_modules/openai/src/core.ts:499:24)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async file:///deps/open-canvas/node_modules/@langchain/openai/dist/chat_models.js:1553:29\n at async RetryOperation._fn (/deps/open-canvas/node_modules/p-retry/index.js:50:12)","status":404,"headers":{"connection":"keep-alive","content-encoding":"gzip","content-type":"application/json;charset=utf-8","date":"Thu, 28 Nov 2024 13:28:41 GMT","keep-alive":"timeout=20","transfer-encoding":"chunked","vary":"accept-encoding"},"lc_error_code":"MODEL_NOT_FOUND","attemptNumber":1,"retriesLeft":6,"pregelTaskId":"989fad13-ef49-50a2-9470-01f11cffdee2","stack":"Error: 404 status code (no body)\n\nTroubleshooting URL: https://js.langchain.com/docs/troubleshooting/errors/MODEL_NOT_FOUND/\n\n at Function.generate (/deps/open-canvas/node_modules/openai/src/error.ts:82:14)\n at OpenAI.makeStatusError (/deps/open-canvas/node_modules/openai/src/core.ts:435:21)\n at OpenAI.makeRequest (/deps/open-canvas/node_modules/openai/src/core.ts:499:24)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async file:///deps/open-canvas/node_modules/@langchain/openai/dist/chat_models.js:1553:29\n at async RetryOperation._fn (/deps/open-canvas/node_modules/p-retry/index.js:50:12)","message":"404 status code (no body)\n\nTroubleshooting URL: https://js.langchain.com/docs/troubleshooting/errors/MODEL_NOT_FOUND/\n"}

Also, I'm using AZURE_MODELS and AZURE_KEY
