The aim of this repository is to provide some example patterns for bringing Langflow into an Enterprise Production environment. It currently provides examples for:
- Docker containers
- Render (using Docker container)
- Kubernetes (with Helm)
- [TODO] Terraform
- Langflow will be used as an API by various clients (JavaScript, React, Python, Java, etc.).
- Production Langflow should not provide front-end UI access.
- Langflow's basic authentication mechanics are not Enterprise-ready:
  - Langflow is to be configured as "open" and accessible to any caller
  - Langflow services are configured behind an API gateway and/or private networking
- Langflow can be configured to use a common centralized Postgres database:
  - This is appropriate for development environments
  - This is less appropriate for post-development environments, as updating a Flow in a shared database upgrades the whole environment at once.
  - This repository takes the opinion that it is better to introduce new code (Flows) to an environment gradually, as faulty code can then be rolled back without having affected all users.
- Feel free to configure a Postgres database per environment – though this repository does not detail this, the configuration process is well-documented elsewhere.
- This is a general-purpose guide, provided under the MIT License. You should take this as an instructive starting point for your own implementation and apply your own Enterprise architecture patterns to it.
This repo is currently based on Langflow v1.0.17; future versions of Langflow may invalidate some of this advice.
Contributions welcome, please raise a Pull Request!
Flows are developed using a Langflow front-end GUI, sharing a common Postgres database. Configuring this is out of scope for this repository; you may start with the Langflow configuration page.
As Langflow versions are updated, each flow currently needs to be reviewed:
- Components may need to be updated; a ⚠️ icon is shown at the top of such components.
- Any code customizations need to be re-implemented after the update.
Flow version management is done manually, via Export to a version control system such as Git.
Langflow does not currently provide a direct packaging or import mechanism for flows. How to manage this is addressed below.
Post-Development environments are not expected to have a front-end GUI, but rather run in "backend-only" mode. How these environments are configured will be discussed later, but for now simply understand that the environment specification includes Flows to be executed from that environment.
Note: A few Langflow components (such as `Chat Memory`) use the Langflow Postgres database to persist state across sessions. As mentioned earlier, this repository does not make use of a shared Postgres database – each runtime operates in isolation from other runtimes. While these components will function (i.e. memory will be stored locally), in a multi-node environment the behavior will likely be inconsistent. You should use components like `Store Message` (with External Memory) to manage state explicitly.
Note: It is most important that Langflow versions are aligned: the exported `.json` files do not make explicit mention of the Langflow version, but the Langflow runtime is, at present, very sensitive to version mismatches. Be sure that the version you are importing is compatible with the runtime version.
Most Flows make use of "Global Variables" to store things like API keys. Within the runtime backend, these are specified via environment variables. For example, if your flow references a variable named `GOOGLE_API_KEY`, you should set an environment variable named `GOOGLE_API_KEY` with the appropriate value.
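As an illustration of this name-for-name mapping (a sketch only, not Langflow's actual implementation; `resolve_global_variable` is a hypothetical helper):

```python
import os

# Normally set via your .env file or deployment environment;
# hard-coded here only so the example is self-contained.
os.environ["GOOGLE_API_KEY"] = "example-key"

def resolve_global_variable(name: str) -> str:
    """Sketch: resolve a Global Variable from the like-named environment variable."""
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"no environment variable set for Global Variable {name!r}")
    return value

print(resolve_global_variable("GOOGLE_API_KEY"))  # -> example-key
```

The key point is that the environment variable name must match the Global Variable name exactly, including case.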
Langflow maintains a Docker repository at https://hub.docker.com/r/langflowai/langflow, which is referenced by the included Dockerfile. Note these two lines:

```dockerfile
COPY flows /app/flows
ENV LANGFLOW_LOAD_FLOWS_PATH=/app/flows
```

During the container build, files are copied from the local `flows` directory into `/app/flows`, and when the container starts, all of the flows in the path pointed to by `LANGFLOW_LOAD_FLOWS_PATH` are imported into the runtime.
Note: If using Docker, you should be sure to specify the `Endpoint Name` on each Flow's Settings page. Flow IDs are currently generated and updated during the import process, with the included Flow ID being replaced. Using `Endpoint Name` gives you control over this value, in addition to being more user-friendly.
To build a Docker image, then, you:
- Put the appropriate version of all of the exported Flow `.json` files into the `flows/` directory;
- Run `docker build` with the appropriate tag;
- Push the Docker image to your container registry.
And then run the Docker container as normal, forwarding API requests to the container port `7860`. For example, a local image:tag `mylangflow:latest` using environment variables in file `.env` could be run with:

```shell
docker run -d --env-file .env -p 7860:7860 mylangflow:latest
```
(An example `.env` file is provided as `.env.example`, supporting the example flows within this repo.)
Render is a popular service that hosts a number of application services, including Docker-based deployments.
A blueprint file `render.yaml` is provided within this repository, and makes use of the `Dockerfile` also within this repository.
You first need to create an environment group named `langflow-demo-flow-secrets` (this is referenced in the `render.yaml` file), and within the environment group put the environment variables referenced by your flows. An example `.env` file is provided as `.env.example`, supporting the example flows within this repo.
From here you can simply click this button:
Note you'll need to use the Standard plan, as 512 MB of memory is insufficient for Langflow. You can find more extensive options and information about configuring the blueprint for things like scaling in the Render documentation.
This assumes you have a Kubernetes cluster, and have both `kubectl` and `helm` installed and configured.
Langflow maintains a Helm chart for the Langflow runtime at https://github.com/langflow-ai/langflow-helm-charts/tree/main/charts/langflow-runtime.
A stripped-down example `values.yaml` file is provided within this repository, but the definitive version is in the langflow-helm-charts repo.
A few points:
- In the `image:` parameter you can specify just `tag:` (which defaults to the Langflow Docker `langflowai/langflow-backend` repo), and also specify `repository:` if you want to use your own image.
- If you default to the Langflow image, you should also specify `downloadFlows:`.
  - Each `url:` should correspond with a downloadable URL. On GitHub, it is advised to use a tag or a SHA, as CDN caching has been observed to result in stale flows being downloaded.
  - An additional parameter `endpoint:` allows you to override (or specify) the endpoint name of the flow.
  - Note that `basicAuth:` and `headers.Authorization:` provide authentication mechanics for the download URL, and this is not currently configurable as a K8s secret.
- If necessary you can specify an `imagePullSecrets:` entry following the standard convention for K8s image pull secrets.
- Environment variables for each pod are specified in the `env:` parameter; these are either plaintext values or else references to K8s secrets. The environment variable `name:` should correspond with the "Global Variable" value referenced by your flows. For example, if your flow references a variable named `GOOGLE_API_KEY`, you should set an environment variable named `GOOGLE_API_KEY` with the appropriate value.
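Putting these options together, a minimal `values.yaml` might look like the following sketch (the repository URL, flow file name, tag, and secret names are illustrative placeholders; consult the chart's own `values.yaml` for the authoritative schema):

```yaml
image:
  tag: "1.0.17"
downloadFlows:
  flows:
    # Pin to a tag or SHA to avoid CDN-cached stale flows
    - url: https://raw.githubusercontent.com/your-org/your-repo/v1.0.0/flows/basic-memory-chatbot.json
      endpoint: basic-memory-chatbot
env:
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        name: openai-api-key
        key: OPENAI_API_KEY
```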
To install the Helm charts:
```shell
helm repo add langflow https://langflow-ai.github.io/langflow-helm-charts
helm repo update
```
And updating to the latest chart versions is just:

```shell
helm repo update
```
Assuming a namespace `langflow` exists, install the chart with the values file `values.yaml` the first time with:

```shell
helm install -n langflow langflow-runtime langflow/langflow-runtime -f values.yaml
```
To upgrade to an updated `values.yaml`:

```shell
helm upgrade -n langflow langflow-runtime langflow/langflow-runtime -f values.yaml
```
Most of the time, an `upgrade` triggers a deployment restart, but not always. You can manually restart the deployment with:

```shell
kubectl rollout restart deployment -n langflow langflow-runtime
```
In a Kubernetes environment, backend services such as the Langflow API are not typically exposed (via an Ingress) to the outside world (i.e. your computer). However, for testing purposes you may wish to create a temporary port forward:
```shell
kubectl port-forward -n langflow service/langflow-runtime 7860:7860
```
Assuming you have a backend runtime available on `localhost` port `7860`, and have both `curl` and `jq` available on the command line, you can do some basic testing like:
```shell
curl -X POST \
  "http://127.0.0.1:7860/api/v1/run/basic-memory-chatbot?stream=false" \
  -H 'Content-Type: application/json' \
  -d '{"input_value": "what is my name",
       "output_type": "chat",
       "input_type": "chat",
       "session_id": "9c5ac5fe-caae-471e-a3cb-b0a69428af95"
      }' | jq '.outputs[0].outputs[0].results.message.text'
```
You may find it easier to use a tool like Postman rather than the command line!
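If you prefer to script such tests, the same call can be sketched in Python using only the standard library (the endpoint name and session ID are the example values from above; `build_run_request` is a helper invented for this sketch):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:7860"  # assumes the port-forward shown earlier

def build_run_request(endpoint_name: str, message: str, session_id: str) -> urllib.request.Request:
    """Construct a POST request for Langflow's /api/v1/run endpoint."""
    payload = {
        "input_value": message,
        "output_type": "chat",
        "input_type": "chat",
        "session_id": session_id,
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/run/{endpoint_name}?stream=false",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request(
    "basic-memory-chatbot",
    "what is my name",
    "9c5ac5fe-caae-471e-a3cb-b0a69428af95",
)
# With a live runtime, send the request and extract the reply text:
# body = json.load(urllib.request.urlopen(req))
# print(body["outputs"][0]["outputs"][0]["results"]["message"]["text"])
```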
There are three environment variables that need to be set:
- `ASTRA_DB_API_ENDPOINT`
- `ASTRA_DB_APPLICATION_TOKEN`
- `OPENAI_API_KEY`
If you are using the provided `values.yaml`, there should be two secrets, named as follows:
DataStax Astra DB Token:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: astra-db
type: Opaque
data:
  ASTRA_DB_API_ENDPOINT: <base64-encoded Astra DB Endpoint>
  ASTRA_DB_APPLICATION_TOKEN: <base64-encoded Astra Token>
```
And OpenAI API Key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openai-api-key
type: Opaque
data:
  OPENAI_API_KEY: <base64-encoded OpenAI API key>
```
Included in the repo is a file `example.html` which makes use of the Langflow embedded chat widget, and assumes the API endpoint is available on `https://localhost:7860` (i.e. port forwarding is enabled).
Its usage is fairly simple: a table of available session UUIDs can be maintained at the top, and available chatbots can be interacted with using these session identifiers.