From f8c75d48f7c3c12a4f80dea59a971e8ac930444b Mon Sep 17 00:00:00 2001
From: Andrew Nguonly
Date: Tue, 10 Dec 2024 16:34:29 -0800
Subject: [PATCH 1/2] Add section about autoscaling.

---
 docs/docs/concepts/langgraph_cloud.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/docs/docs/concepts/langgraph_cloud.md b/docs/docs/concepts/langgraph_cloud.md
index 11e5c8afc..cf088718b 100644
--- a/docs/docs/concepts/langgraph_cloud.md
+++ b/docs/docs/concepts/langgraph_cloud.md
@@ -21,6 +21,15 @@ See the [how-to guide](../cloud/deployment/cloud.md#create-new-deployment) for c
 | Development | 1 CPU | 1 GB | Up to 1 container |
 | Production | 1 CPU | 2 GB | Up to 10 containers |
 
+## Autoscaling
+`Production` type deployments automatically scale up to 10 containers. Scaling is based on the current request load for a single container. Specifically, the autoscaling implementation scales the deployment so that each container is processing about 10 concurrent requests. For example, if the deployment is processing 20 concurrent requests, the deployment will scale up from 1 container to 2 containers (20 requests / 2 containers = 10 requests per container). If a deployment of 2 containers is processing 10 requests, the deployment will scale down from 2 containers to 1 container (10 requests / 1 container = 10 requests per container). 10 concurrent requests per container is the target for scale-up and scale-down actions.
+
+However, 10 concurrent requests per container is not a hard limit. The number of concurrent requests can exceed 10 if there is a sudden burst of requests, for example.
+
+Scale-down actions are delayed for 30 minutes before any action is taken. In other words, if the autoscaling implementation decides to scale down a deployment, it will first wait for 30 minutes before scaling down. After 30 minutes, the concurrency metric is recomputed and the deployment will scale down if the concurrency metric has met the target threshold. Otherwise, the deployment remains scaled up. This "cool down" period ensures that deployments do not scale up and down too frequently.
+
+In the future, the autoscaling implementation may evolve to accommodate other metrics such as background run queue size.
+
 ## Revision
 
 A revision is an iteration of a [deployment](#deployment). When a new deployment is created, an initial revision is automatically created. To deploy new code changes or update environment variable configurations for a deployment, a new revision must be created. When a revision is created, a new container image is built automatically.

From 6f59d5069f544f9527b0a2bf224f452060f0f0dc Mon Sep 17 00:00:00 2001
From: Andrew Nguonly
Date: Tue, 10 Dec 2024 16:43:37 -0800
Subject: [PATCH 2/2] Update formatting of content.

---
 docs/docs/concepts/langgraph_cloud.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/docs/concepts/langgraph_cloud.md b/docs/docs/concepts/langgraph_cloud.md
index cf088718b..6cd6f4b8b 100644
--- a/docs/docs/concepts/langgraph_cloud.md
+++ b/docs/docs/concepts/langgraph_cloud.md
@@ -22,9 +22,12 @@ See the [how-to guide](../cloud/deployment/cloud.md#create-new-deployment) for c
 | Production | 1 CPU | 2 GB | Up to 10 containers |
 
 ## Autoscaling
-`Production` type deployments automatically scale up to 10 containers. Scaling is based on the current request load for a single container. Specifically, the autoscaling implementation scales the deployment so that each container is processing about 10 concurrent requests. For example, if the deployment is processing 20 concurrent requests, the deployment will scale up from 1 container to 2 containers (20 requests / 2 containers = 10 requests per container). If a deployment of 2 containers is processing 10 requests, the deployment will scale down from 2 containers to 1 container (10 requests / 1 container = 10 requests per container). 10 concurrent requests per container is the target for scale-up and scale-down actions.
+`Production` type deployments automatically scale up to 10 containers. Scaling is based on the current request load for a single container. Specifically, the autoscaling implementation scales the deployment so that each container is processing about 10 concurrent requests. For example...
 
-However, 10 concurrent requests per container is not a hard limit. The number of concurrent requests can exceed 10 if there is a sudden burst of requests, for example.
+- If the deployment is processing 20 concurrent requests, the deployment will scale up from 1 container to 2 containers (20 requests / 2 containers = 10 requests per container).
+- If a deployment of 2 containers is processing 10 requests, the deployment will scale down from 2 containers to 1 container (10 requests / 1 container = 10 requests per container).
+
+10 concurrent requests per container is the target threshold. However, it is not a hard limit; the number of concurrent requests can exceed 10 if there is a sudden burst of requests.
 
 Scale-down actions are delayed for 30 minutes before any action is taken. In other words, if the autoscaling implementation decides to scale down a deployment, it will first wait for 30 minutes before scaling down. After 30 minutes, the concurrency metric is recomputed and the deployment will scale down if the concurrency metric has met the target threshold. Otherwise, the deployment remains scaled up. This "cool down" period ensures that deployments do not scale up and down too frequently.
 
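The scale-up and scale-down arithmetic the patch describes can be sketched as follows. This is a minimal illustrative model of the documented behavior, not LangGraph Cloud's actual implementation: the function name and constants are hypothetical, with the 10-request target and the 10-container `Production` cap taken from the text above.

```python
import math

TARGET_REQUESTS_PER_CONTAINER = 10  # target threshold from the docs
MAX_CONTAINERS = 10                 # Production deployment cap from the docs

def desired_containers(concurrent_requests: int) -> int:
    """Containers needed so each handles about 10 concurrent requests.

    Illustrative only. The docs note the target is not a hard limit: a
    sudden burst can push a container above 10 requests before scaling
    reacts, and scale-down is further delayed by a 30-minute cool down.
    """
    if concurrent_requests <= 0:
        return 1  # never scale below one container
    needed = math.ceil(concurrent_requests / TARGET_REQUESTS_PER_CONTAINER)
    return min(MAX_CONTAINERS, max(1, needed))

print(desired_containers(20))  # 2 containers, as in the scale-up example
print(desired_containers(10))  # 1 container, as in the scale-down example
```

The `ceil(current_load / target_per_container)` shape mirrors how concurrency-based autoscalers generally compute a desired replica count, clamped here to the 1 to 10 container range of the `Production` tier.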