Langchain examples for testing
and send with otlp and full chatbot demo

---

So... it turns out LangChain uses the v1beta1 prediction service client directly under
the hood, so we should probably instrument that after all instead of the main wrapper
API.

It also has a streaming option that we should try to support, and it has ainvoke() for
asyncio.
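A generic sketch of the method-wrapping approach the message alludes to — instrumenting the lower-level client method rather than the wrapper API. The class below is a hypothetical stand-in, not the real v1beta1 `PredictionServiceClient`, and `instrument_method` is an illustrative helper, not part of any library:

```python
from functools import wraps


def instrument_method(cls, method_name, on_call):
    """Replace cls.method_name with a wrapper that reports each call.

    Real instrumentation would start an OpenTelemetry span around the
    underlying request here instead of just recording the method name.
    """
    original = getattr(cls, method_name)

    @wraps(original)
    def wrapper(self, *args, **kwargs):
        on_call(method_name)
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)


# Hypothetical stand-in for the v1beta1 prediction service client
class FakePredictionServiceClient:
    def generate_content(self, request):
        return {"echo": request}


calls = []
instrument_method(FakePredictionServiceClient, "generate_content", calls.append)

client = FakePredictionServiceClient()
response = client.generate_content({"prompt": "hi"})
print(calls)  # → ['generate_content']
```

Patching at this level means every caller of the client — including LangChain's wrapper — gets traced without further changes.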
aabmass committed Jan 23, 2025
1 parent aedc6f4 commit d998830
Showing 8 changed files with 1,901 additions and 0 deletions.
@@ -0,0 +1,2 @@
.venv
docker-compose.yaml
@@ -0,0 +1 @@
3.13
@@ -0,0 +1,33 @@
This sample contains part of the LangGraph chatbot demo taken from
https://python.langchain.com/docs/tutorials/chatbot, running with OTel instrumentation. It
sends traces and logs to the OTel Collector, which forwards them to GCP. Docker Compose
wraps everything up to make it easy to run.

## Running the example

I recommend running in Cloud Shell; it's super simple. You will see GenAI spans in the
Trace Explorer right away. Make sure the Vertex AI and Cloud Trace APIs are enabled in
the project.
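If those APIs are not yet enabled, you can enable them with the gcloud CLI (assuming you have permission to enable services on the project, and that `GOOGLE_CLOUD_PROJECT` is set):

```sh
gcloud services enable aiplatform.googleapis.com cloudtrace.googleapis.com \
    --project="$GOOGLE_CLOUD_PROJECT"
```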

### Cloud Shell or GCE

```sh
git clone --branch=vertex-langgraph https://github.com/aabmass/opentelemetry-python-contrib.git
cd opentelemetry-python-contrib/instrumentation-genai/opentelemetry-instrumentation-vertexai/examples/langgraph-chatbot-demo
docker compose up --build --abort-on-container-exit
```

### Locally with Application Default Credentials

```sh
git clone --branch=vertex-langgraph https://github.com/aabmass/opentelemetry-python-contrib.git
cd opentelemetry-python-contrib/instrumentation-genai/opentelemetry-instrumentation-vertexai/examples/langgraph-chatbot-demo

# Export the credentials via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable so
# they are available inside the Docker containers
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json
# Let the collector read the mounted config
export USERID="$(id -u)"
# Specify the project ID
export GOOGLE_CLOUD_PROJECT=<your project id>
docker compose up --build --abort-on-container-exit
```
@@ -0,0 +1,106 @@
# https://python.langchain.com/docs/tutorials/chatbot

from os import environ
from typing import Sequence

from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_vertexai import ChatVertexAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph
from langgraph.graph.message import add_messages
from typing_extensions import Annotated, TypedDict

from opentelemetry import trace


def main() -> None:
    model = ChatVertexAI(
        model="gemini-1.5-flash",
        project=environ.get("GOOGLE_CLOUD_PROJECT", None),
    )

    prompt_template = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                "You are a helpful assistant. Answer all questions to the best of your ability in {language}.",
            ),
            MessagesPlaceholder(variable_name="messages"),
        ]
    )

    class State(TypedDict):
        messages: Annotated[Sequence[BaseMessage], add_messages]
        language: str

    trimmer = trim_messages(
        max_tokens=200,
        strategy="last",
        token_counter=model,
        include_system=True,
        allow_partial=False,
        start_on="human",
    )

    messages = [
        SystemMessage(content="you're a good assistant"),
        HumanMessage(content="hi! I'm bob"),
        AIMessage(content="hi!"),
        HumanMessage(content="I like vanilla ice cream"),
        AIMessage(content="nice"),
        HumanMessage(content="whats 2 + 2"),
        AIMessage(content="4"),
        HumanMessage(content="thanks"),
        AIMessage(content="no problem!"),
        HumanMessage(content="having fun?"),
        AIMessage(content="yes!"),
    ]

    workflow = StateGraph(state_schema=State)

    def call_model(state: State):
        trimmed_messages = trimmer.invoke(state["messages"])
        prompt = prompt_template.invoke(
            {"messages": trimmed_messages, "language": state["language"]}
        )
        response = model.invoke(prompt)
        return {"messages": [response]}

    workflow.add_edge(START, "model")
    workflow.add_node("model", call_model)

    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)

    config = {"configurable": {"thread_id": "abc567"}}
    query = "What is my name?"
    language = "English"

    input_messages = messages + [HumanMessage(query)]
    output = app.invoke(
        {"messages": input_messages, "language": language},
        config,
    )
    output["messages"][-1].pretty_print()

    config = {"configurable": {"thread_id": "abc678"}}
    query = "What math problem did I ask?"
    language = "English"

    input_messages = messages + [HumanMessage(query)]
    output = app.invoke(
        {"messages": input_messages, "language": language},
        config,
    )
    output["messages"][-1].pretty_print()


with trace.get_tracer(__name__).start_as_current_span("demo-root-span"):
    main()
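The commit message also mentions `ainvoke()` for asyncio. A minimal sketch of that async calling pattern, using a hypothetical stand-in model rather than the real `ChatVertexAI`:

```python
import asyncio


class FakeChatModel:
    """Hypothetical stand-in mimicking LangChain's invoke()/ainvoke() pair."""

    def invoke(self, prompt):
        return f"echo: {prompt}"

    async def ainvoke(self, prompt):
        # Yield control to the event loop, as a real client would while
        # waiting on network I/O
        await asyncio.sleep(0)
        return self.invoke(prompt)


result = asyncio.run(FakeChatModel().ainvoke("hi"))
print(result)  # → echo: hi
```

Instrumentation that wraps only the synchronous call path would miss spans from `ainvoke()` (and from streaming), which is why the commit message flags both as cases to support.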
@@ -0,0 +1,43 @@
services:
  app:
    build:
      dockerfile_inline: |
        FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
        RUN apt-get update && apt-get install -y git
        WORKDIR app/
        COPY pyproject.toml uv.lock /app
        RUN uv sync --frozen --no-dev
        ENV PATH="/app/.venv/bin:$PATH"
        COPY . /app
        ENTRYPOINT []
        CMD ["opentelemetry-instrument", "python", "chatbot.py"]
    volumes:
      - ${GOOGLE_APPLICATION_CREDENTIALS:-/dev/null}:${GOOGLE_APPLICATION_CREDENTIALS:-/dev/null}:ro
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4317
      - OTEL_SERVICE_NAME=langgraph-chatbot-demo
      - OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
      - OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true

      - GOOGLE_CLOUD_PROJECT
      - GOOGLE_CLOUD_QUOTA_PROJECT
      - GOOGLE_APPLICATION_CREDENTIALS
    depends_on:
      - otelcol

  otelcol:
    image: otel/opentelemetry-collector-contrib:0.118.0
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml:ro
      - ${GOOGLE_APPLICATION_CREDENTIALS:-/dev/null}:${GOOGLE_APPLICATION_CREDENTIALS:-/dev/null}:ro
    environment:
      - GOOGLE_CLOUD_PROJECT
      - GOOGLE_CLOUD_QUOTA_PROJECT
      - GOOGLE_APPLICATION_CREDENTIALS
    # If the collector does not have permission to read the mounted volumes, set
    # USERID=$(id -u) to run the container as the current user
    user: $USERID

volumes:
  logs:
@@ -0,0 +1,42 @@
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"

extensions:
  googleclientauth:
    project: ${GOOGLE_CLOUD_PROJECT}
    quota_project: ${GOOGLE_CLOUD_QUOTA_PROJECT}
    scopes:
      - "https://www.googleapis.com/auth/trace.append"
      - "https://www.googleapis.com/auth/cloud-platform"

processors:
  resource:
    attributes:
      - key: gcp.project_id
        value: ${GOOGLE_CLOUD_PROJECT}
        action: insert

exporters:
  googlecloud:
    project: ${GOOGLE_CLOUD_PROJECT}
    log:
      default_log_name: "collector-otlp-logs"
  otlp:
    endpoint: https://telemetry.us-central1.rep.googleapis.com:443
    auth:
      authenticator: googleclientauth

service:
  extensions: [googleclientauth]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [resource]
      exporters: [googlecloud]
@@ -0,0 +1,22 @@
[project]
name = "langgraph-chatbot-demo"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.9"
dependencies = [
    "langchain-core>=0.3.31",
    "langchain-google-vertexai>=2.0.7",
    "langgraph>0.2.27",
    "opentelemetry-distro>=0.50b0",
    "opentelemetry-exporter-otlp-proto-grpc>=1.29.0",
    "opentelemetry-instrumentation-vertexai",
]

[tool.uv.sources]
opentelemetry-instrumentation-vertexai = { git = "https://github.com/aabmass/opentelemetry-python-contrib.git", subdirectory = "instrumentation-genai/opentelemetry-instrumentation-vertexai", branch = "vertex-langgraph" }

[dependency-groups]
dev = [
    "ruff>=0.9.2",
]