diff --git a/.github/workflows/regenerate-tutorials.yml b/.github/workflows/regenerate-tutorials.yml index 69c6dcd..9a17bc6 100644 --- a/.github/workflows/regenerate-tutorials.yml +++ b/.github/workflows/regenerate-tutorials.yml @@ -84,6 +84,11 @@ jobs: "${GITHUB_WORKSPACE}/loki/docs/sources/query/logcli/logcli-tutorial.md" "${GITHUB_WORKSPACE}/killercoda/loki/logcli-tutorial" working-directory: killercoda/tools/transformer + - run: > + ./transformer + "${GITHUB_WORKSPACE}/loki/docs/sources/send-data/k8s-monitoring-helm/_index.md" + "${GITHUB_WORKSPACE}/killercoda/loki/k8s-monitoring-helm" + working-directory: killercoda/tools/transformer - run: > ./transformer "${GITHUB_WORKSPACE}/grafana/docs/sources/tutorials/alerting-get-started/index.md" diff --git a/loki/alloy-kafka-logs/preprocessed.md b/loki/alloy-kafka-logs/preprocessed.md index ee5f56d..01f4ab8 100755 --- a/loki/alloy-kafka-logs/preprocessed.md +++ b/loki/alloy-kafka-logs/preprocessed.md @@ -4,6 +4,8 @@ menuTitle: Sending Logs to Loki via Kafka using Alloy description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki. weight: 250 killercoda: + comment: | + This file is used to generate the interactive tutorial for sending logs to Loki via Kafka using Alloy. Please do not change url's with placeholders from the code snippets. This tutorial is assumes they remain static. title: Sending Logs to Loki via Kafka using Alloy description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki. backend: diff --git a/loki/alloy-otel-logs/preprocessed.md b/loki/alloy-otel-logs/preprocessed.md index 9e121ac..653c168 100755 --- a/loki/alloy-otel-logs/preprocessed.md +++ b/loki/alloy-otel-logs/preprocessed.md @@ -4,6 +4,8 @@ menuTitle: Sending OpenTelemetry logs to Loki using Alloy description: Configuring Grafana Alloy to send OpenTelemetry logs to Loki. weight: 250 killercoda: + comment: | + This file is used to generate the interactive tutorial for sending logs to Loki via otel using Alloy. Please do not change url's with placeholders from the code snippets. This tutorial is assumes they remain static. title: Sending OpenTelemetry logs to Loki using Alloy description: Configuring Grafana Alloy to send OpenTelemetry logs to Loki. backend: diff --git a/loki/k8s-monitoring-helm/preprocessed.md b/loki/k8s-monitoring-helm/preprocessed.md index ec02c82..4f1ae04 100755 --- a/loki/k8s-monitoring-helm/preprocessed.md +++ b/loki/k8s-monitoring-helm/preprocessed.md @@ -136,7 +136,7 @@ helm install --values grafana-values.yml grafana grafana/grafana --namespace met As before, the command also includes a `values` file that specifies the configuration for Grafana. There are two important configuration attributes to take note of: -1. `adminUser` and `adminPassword`: These are the credentials you will use to log in to Grafana. The values are `admin` and `adminadminadmin` respectively. The recommended practice is to either use a Kubernetes secret or allow Grafana to generate a password for you. For more details on how to configure the Grafana Helm chart, refer to the Grafana Helm [documentation](https://grafana.com/docs/grafana/latest/installation/helm/). +1. `adminUser` and `adminPassword`: These are the credentials you will use to log in to Grafana. The values are `admin` and `adminadminadmin` respectively. The recommended practice is to either use a Kubernetes secret or allow Grafana to generate a password for you. 
For more details on how to configure the Grafana Helm chart, refer to the Grafana Helm [documentation](https://grafana.com/docs/grafana//installation/helm/). 2. `datasources`: This section of the configuration lets you define the data sources that Grafana should use. In this tutorial, you will define a Loki data source. The data source is defined as follows: @@ -163,12 +163,12 @@ As before, the command also includes a `values` file that specifies the configur ## Deploy the Kubernetes Monitoring Helm chart -The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana stack. This includes the ability to collect metrics, logs, traces, and continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events. +The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana stack. This includes the ability to collect metrics, logs, traces, and continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events. To deploy the Kubernetes Monitoring Helm chart run the following command: ```bash -helm install --values ./k8s-monitoring-values.yml k8s grafana/k8s-monitoring -n meta +helm install --values ./k8s-monitoring-values.yml k8s grafana/k8s-monitoring -n meta ``` Within the configuration file `k8s-monitoring-values.yml` we have defined the following: @@ -254,20 +254,20 @@ kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0 > **Tip:** > This will make your terminal unusable until you stop the port-forwarding process. To stop the process, press `Ctrl + C`. -This command will port-forward the Grafana service to your local machine on port `3000`. +This command will port-forward the Grafana service to your local machine on port `3000`. -You can now access Grafana by navigating to [http://localhost:3000](http://localhost:3000) in your browser. The default credentials are `admin` and `adminadminadmin`. +You can now access Grafana by navigating to [http://localhost:3000](http://localhost:3000) in your browser. The default credentials are `admin` and `adminadminadmin`. -One of the first places you should visit is Explore Logs which lets you automatically visualize and explore your logs without having to write queries: +One of the first places you should visit is Logs Drilldown which lets you automatically visualize and explore your logs without having to write queries: [http://localhost:3000/a/grafana-lokiexplore-app](http://localhost:3000/a/grafana-lokiexplore-app) -{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-explore-logs.png" caption="Explore Logs view of K8s logs" alt="Explore Logs view of K8s logs" >}} +{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-explore-logs.png" caption="Logs Drilldown view of K8s logs" alt="Logs Drilldown view of K8s logs" >}} -## (Optional): View the Alloy UI +## (Optional) View the Alloy UI The Kubernetes Monitoring Helm chart deploys Grafana Alloy to collect and forward telemetry data from the Kubernetes cluster. The Helm chart is designed to abstract you away from creating an Alloy configuration file. However if you would like to understand the pipeline you can view the Alloy UI. To access the Alloy UI, you will need to port-forward the Alloy service to your local machine. 
To do this, run the following command: @@ -315,8 +315,8 @@ and navigate to [http://localhost:3000/a/grafana-lokiexplore-app](http://localho In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Monitoring Helm chart to collect and store logs from a Kubernetes cluster. We have deployed a minimal test version of each of these Helm charts to demonstrate how quickly you can get started with Loki. It is now worth exploring each of these Helm charts in more detail to understand how to scale them to meet your production needs: -* [Loki Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/) -* [Grafana Helm chart](https://grafana.com/docs/grafana/latest/installation/helm/) +* [Loki Helm chart](https://grafana.com/docs/loki//setup/install/helm/) +* [Grafana Helm chart](https://grafana.com/docs/grafana//installation/helm/) * [Kubernetes Monitoring Helm chart](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/) diff --git a/loki/k8s-monitoring-helm/step5.md b/loki/k8s-monitoring-helm/step5.md index 76f0966..5aeb043 100644 --- a/loki/k8s-monitoring-helm/step5.md +++ b/loki/k8s-monitoring-helm/step5.md @@ -5,7 +5,7 @@ The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwar To deploy the Kubernetes Monitoring Helm chart run the following command: ```bash -helm install --values ./k8s-monitoring-values.yml k8s grafana/k8s-monitoring -n meta +helm install --values ./k8s-monitoring-values.yml k8s grafana/k8s-monitoring -n meta ```{{exec}} Within the configuration file `k8s-monitoring-values.yml`{{copy}} we have defined the following: diff --git a/loki/k8s-monitoring-helm/step6.md b/loki/k8s-monitoring-helm/step6.md index 0529a6b..20cf220 100644 --- a/loki/k8s-monitoring-helm/step6.md +++ b/loki/k8s-monitoring-helm/step6.md @@ -14,7 +14,7 @@ This command will port-forward the Grafana service to your local machine on port You can now access Grafana by navigating to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}) in your browser. The default credentials are `admin`{{copy}} and `adminadminadmin`{{copy}}. -One of the first places you should visit is Explore Logs which lets you automatically visualize and explore your logs without having to write queries: +One of the first places you should visit is Logs Drilldown which lets you automatically visualize and explore your logs without having to write queries: [http://localhost:3000/a/grafana-lokiexplore-app]({{TRAFFIC_HOST1_3000}}/a/grafana-lokiexplore-app) -![Explore Logs view of K8s logs](https://grafana.com/media/docs/loki/k8s-logs-explore-logs.png) +![Logs Drilldown view of K8s logs](https://grafana.com/media/docs/loki/k8s-logs-explore-logs.png) diff --git a/loki/k8s-monitoring-helm/step7.md b/loki/k8s-monitoring-helm/step7.md index 1d20ab7..e10b5a9 100644 --- a/loki/k8s-monitoring-helm/step7.md +++ b/loki/k8s-monitoring-helm/step7.md @@ -1,4 +1,4 @@ -# (Optional): View the Alloy UI +# (Optional) View the Alloy UI The Kubernetes Monitoring Helm chart deploys Grafana Alloy to collect and forward telemetry data from the Kubernetes cluster. The Helm chart is designed to abstract you away from creating an Alloy configuration file. However if you would like to understand the pipeline you can view the Alloy UI. To access the Alloy UI, you will need to port-forward the Alloy service to your local machine. 
To do this, run the following command: diff --git a/tools/alloy-proxy/config.alloy b/tools/alloy-proxy/config.alloy index 4899efa..c992ece 100644 --- a/tools/alloy-proxy/config.alloy +++ b/tools/alloy-proxy/config.alloy @@ -1,3 +1,4 @@ +// Receives Logss over HTTP loki.write "local" { endpoint { url = "https://logs-prod-021.grafana.net/loki/api/v1/push" @@ -19,4 +20,25 @@ loki.source.api "loki_push_api" { labels = { forwarded = "true", } +} + +// Receives metrics over HTTP +prometheus.receive_http "api" { + http { + listen_address = "0.0.0.0" + listen_port = 9998 + } + forward_to = [prometheus.remote_write.local.receiver] +} + +// Send metrics to a locally running Mimir. +prometheus.remote_write "local" { + endpoint { + url = "https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push" + + basic_auth { + username = sys.env("GRAFANA-CLOUD-USERNAME-METRICS") + password = sys.env("GRAFANA_CLOUD_PASSWORD") + } + } } \ No newline at end of file diff --git a/workshops/course-tracker-test/finish.md b/workshops/course-tracker-test/finish.md index ec1bea6..551506c 100644 --- a/workshops/course-tracker-test/finish.md +++ b/workshops/course-tracker-test/finish.md @@ -1,23 +1,23 @@ -# Summary - -In this example, we configured the OpenTelemetry Collector to receive logs from an example application and send them to Loki using the native OTLP endpoint. Make sure to also consult the Loki configuration file `loki-config.yaml`{{copy}} to understand how we have configured Loki to receive logs from the OpenTelemetry Collector. +# What next? ## Back to docs -Head back to where you started from to continue with the [Loki documentation](https://grafana.com/docs/loki/latest/send-data/otel). +Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/get-started/quick-start/). -# Further reading +You have completed the Loki Quickstart demo. So where to go next? Here are a few suggestions: -For more information on the OpenTelemetry Collector and the native OTLP endpoint of Loki, refer to the following resources: +- **Deploy:** Loki can be deployed in multiple ways. For production usecases we recommend deploying Loki via the [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/). -- [Loki OTLP endpoint](https://grafana.com/docs/loki/latest/send-data/otel/) +- **Send Logs:** In this example we used Grafana Alloy to collect and send logs to Loki. However there are many other methods you can use depending upon your needs. For more information see [send data](https://grafana.com/docs/loki/next/send-data/). -- [How is native OTLP endpoint different from Loki Exporter](https://grafana.com/docs/loki/latest/send-data/otel/native_otlp_vs_loki_exporter) +- **Query Logs:** LogQL is an extensive query language for logs and contains many tools to improve log retrival and generate insights. For more information see the [Query section](https://grafana.com/docs/loki/latest/query/). -- [OpenTelemetry Collector Configuration](https://opentelemetry.io/docs/collector/configuration/) +- **Alert:** Lastly you can use the ruler component of Loki to create alerts based on log queries. For more information see [Alerting](https://grafana.com/docs/loki/latest/alert/). 
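If you want a feel for what such a ruler alert evaluates, the condition is just a LogQL metric query compared against a threshold. A minimal sketch reusing the `env` and `level` labels from this tutorial (the threshold value is purely illustrative):

```bash
# Fire when the production environment logs more than one ERROR line per second over 5 minutes
sum(rate({env="production"} | logfmt | level="ERROR" [5m])) > 1
```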
-# Complete metrics, logs, traces, and profiling example +## Complete metrics, logs, traces, and profiling example -If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). `Intro-to-mltp`{{copy}} provides a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. +If you would like to run a demonstration environment that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). +It’s a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. -The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp`{{copy}} can also be pushed to Grafana Cloud. +The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. +You can also push the data from the environment to [Grafana Cloud](https://grafana.com/cloud/). diff --git a/workshops/course-tracker-test/index.json b/workshops/course-tracker-test/index.json index a6a2041..65c5326 100644 --- a/workshops/course-tracker-test/index.json +++ b/workshops/course-tracker-test/index.json @@ -1,10 +1,10 @@ { - "title": "Getting started with the OpenTelemetry Collector and Loki tutorial", - "description": "A Tutorial configuring the OpenTelemetry Collector to send OpenTelemetry logs to Loki", + "title": "Loki Quickstart Demo", + "description": "This sandbox provides an online enviroment for testing the Loki quickstart demo.", "details": { "intro": { "text": "intro.md", - "foreground": "update.sh" + "foreground": "setup.sh" }, "steps": [ { @@ -15,6 +15,21 @@ }, { "text": "step3.md" + }, + { + "text": "step4.md" + }, + { + "text": "step5.md" + }, + { + "text": "step6.md" + }, + { + "text": "step7.md" + }, + { + "text": "step8.md" } ], "finish": { diff --git a/workshops/course-tracker-test/intro.md b/workshops/course-tracker-test/intro.md index c74d413..3dd571f 100644 --- a/workshops/course-tracker-test/intro.md +++ b/workshops/course-tracker-test/intro.md @@ -1,30 +1,11 @@ -# Getting started with the OpenTelemetry Collector and Loki tutorial +# Quickstart to run Loki locally -The OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process, and export telemetry data. With the introduction of the OTLP endpoint in Loki, you can now send logs from applications instrumented with OpenTelemetry to Loki using the OpenTelemetry Collector in native OTLP format. -In this example, we will teach you how to configure the OpenTelemetry Collector to receive logs in the OpenTelemetry format and send them to Loki using the OTLP HTTP protocol. This will involve configuring the following components in the OpenTelemetry Collector: +This quick start guide will walk you through deploying Loki in single binary mode (also known as [monolithic mode](https://grafana.com/docs/loki/latest/get-started/deployment-modes/#monolithic-mode)) using Docker Compose. Grafana Loki is only one component of the Grafana observability stack for logs. In this tutorial we will refer to this stack as the **Loki stack**. The Loki stack consists of the following components: -- **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC. 
+![Loki Stack](https://grafana.com/media/docs/loki/getting-started-loki-stack-3.png) -- **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*`{{copy}} components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. +- **Alloy**: [Grafana Alloy](https://grafana.com/docs/alloy/latest/) is an open source telemetry collector for metrics, logs, traces, and continuous profiles. In this quickstart guide Grafana Alloy has been configured to tail logs from all Docker containers and forward them to Loki. -- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*`{{copy}} components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. +- **Loki**: A log aggregation system to store the collected logs. For more information on what Loki is, see the [Loki overview](https://grafana.com/docs/loki/latest/get-started/overview/). -## Scenario - -In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services: - -- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. - -- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. - -- **Simulation Service:** Generates sensor data for each plant. - -- **Websocket Service:** Manages the websocket connections for the application. - -- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. - -- **Main App:** The main application that ties all the services together. - -- **Database:** A database that stores user and plant data. - -Each service generates logs using the OpenTelemetry SDK and exports to the OpenTelemetry Collector in the OpenTelemetry format (OTLP). The Collector then ingests the logs and sends them to Loki. +- **Grafana**: [Grafana](https://grafana.com/docs/grafana/latest/) is an open-source platform for monitoring and observability. Grafana will be used to query and visualize the logs stored in Loki. diff --git a/workshops/course-tracker-test/preprocessed.md b/workshops/course-tracker-test/preprocessed.md index 4eb12d6..954fc82 100755 --- a/workshops/course-tracker-test/preprocessed.md +++ b/workshops/course-tracker-test/preprocessed.md @@ -1,363 +1,528 @@ --- -title: Getting started with the OpenTelemetry Collector and Loki tutorial -menuTitle: OTel Collector tutorial -description: A Tutorial configuring the OpenTelemetry Collector to send OpenTelemetry logs to Loki -weight: 300 +title: Quickstart to run Loki locally +menuTitle: Loki quickstart +weight: 200 +description: How to deploy Loki locally using Docker Compose. 
killercoda: - title: Getting started with the OpenTelemetry Collector and Loki tutorial - description: A Tutorial configuring the OpenTelemetry Collector to send OpenTelemetry logs to Loki - preprocessing: - substitutions: - - regexp: loki-fundamentals_otel-collector_1 - replacement: loki-fundamentals_otel-collector_1 + comment: | + The killercoda front matter and the HTML comments that start ' +# Quickstart to run Loki locally -# Getting started with the OpenTelemetry Collector and Loki tutorial +This quick start guide will walk you through deploying Loki in single binary mode (also known as [monolithic mode](https://grafana.com/docs/loki//get-started/deployment-modes/#monolithic-mode)) using Docker Compose. Grafana Loki is only one component of the Grafana observability stack for logs. In this tutorial we will refer to this stack as the **Loki stack**. The Loki stack consists of the following components: -The OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process, and export telemetry data. With the introduction of the OTLP endpoint in Loki, you can now send logs from applications instrumented with OpenTelemetry to Loki using the OpenTelemetry Collector in native OTLP format. -In this example, we will teach you how to configure the OpenTelemetry Collector to receive logs in the OpenTelemetry format and send them to Loki using the OTLP HTTP protocol. This will involve configuring the following components in the OpenTelemetry Collector: +{{< figure max-width="100%" src="/media/docs/loki/getting-started-loki-stack-3.png" caption="Loki Stack" alt="Loki Stack" >}} -- **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC. -- **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. -- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. +* **Alloy**: [Grafana Alloy](https://grafana.com/docs/alloy/latest/) is an open source telemetry collector for metrics, logs, traces, and continuous profiles. In this quickstart guide Grafana Alloy has been configured to tail logs from all Docker containers and forward them to Loki. +* **Loki**: A log aggregation system to store the collected logs. For more information on what Loki is, see the [Loki overview](https://grafana.com/docs/loki//get-started/overview/). +* **Grafana**: [Grafana](https://grafana.com/docs/grafana/latest/) is an open-source platform for monitoring and observability. Grafana will be used to query and visualize the logs stored in Loki. +## Before you begin -## Dependencies - -Before you begin, ensure you have the following to run the demo: - -- Docker -- Docker Compose +Before you start, you need to have the following installed on your local system: +- Install [Docker](https://docs.docker.com/install) +- Install [Docker Compose](https://docs.docker.com/compose/install) > **Tip:** -> Alternatively, you can try out this example in our interactive learning environment: [Getting started with the OpenTelemetry Collector and Loki tutorial](https://killercoda.com/grafana-labs/course/loki/otel-collector-getting-started). 
+> Alternatively, you can try out this example in our interactive learning environment: [Loki Quickstart Sandbox](https://killercoda.com/grafana-labs/course/loki/loki-quickstart). > > It's a fully configured environment with all the dependencies already installed. > > ![Interactive](/media/docs/loki/loki-ile.svg) > -> Provide feedback, report bugs, and raise issues for the tutorial in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). +> Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). + + + + + +## Deploy the Loki stack + + +> **Note:** +> This quickstart assumes you are running Linux or MacOS. Windows users can follow the same steps using [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install). -## Scenario +**To deploy the Loki stack locally, follow these steps:** -In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services: +1. Clone the Loki fundamentals repository and checkout the getting-started branch: -- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. -- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. -- **Simulation Service:** Generates sensor data for each plant. -- **Websocket Service:** Manages the websocket connections for the application. -- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. -- **Main App:** The main application that ties all the services together. -- **Database:** A database that stores user and plant data. + ```bash + git clone https://github.com/grafana/loki-fundamentals.git -b getting-started + ``` -Each service generates logs using the OpenTelemetry SDK and exports to the OpenTelemetry Collector in the OpenTelemetry format (OTLP). The Collector then ingests the logs and sends them to Loki. +1. Change to the `loki-fundamentals` directory: - + ```bash + cd loki-fundamentals + ``` - +1. With `loki-fundamentals` as the current working directory deploy Loki, Alloy, and Grafana using Docker Compose: -## Step 1: Environment setup - -In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. - -1. To get started, clone the repository that contains our demo application: - - ```bash - git clone -b microservice-otel-collector https://github.com/grafana/loki-fundamentals.git - ``` - -1. Next we will spin up our observability stack using Docker Compose: - - - ```bash - docker compose -f loki-fundamentals/docker-compose.yml up -d - ``` - - - - - ```bash - docker-compose -f loki-fundamentals/docker-compose.yml up -d - ``` - - - - To check the status of services we can run the following command: - - ```bash - docker ps -a - ``` - - > **Note:** - > The OpenTelemetry Collector container will show as `Stopped` or `Exited (1) About a minute ago`. This is expected as we have provided an empty configuration file. We will update this file in the next step. - - -After we've finished configuring the OpenTelemetry Collector and sending logs to Loki, we will be able to view the logs in Grafana. 
To check if Grafana is up and running, navigate to the following URL: [http://localhost:3000](http://localhost:3000) - + ```bash + docker compose up -d + ``` + After running the command, you should see a similar output: - + ```console + ✔ Container loki-fundamentals-grafana-1 Started 0.3s + ✔ Container loki-fundamentals-loki-1 Started 0.3s + ✔ Container loki-fundamentals-alloy-1 Started 0.4s + ``` -## Step 2: Configuring the OpenTelemetry Collector +With the Loki stack running, you can now verify each component is up and running: -To configure the Collector to ingest OpenTelemetry logs from our application, we need to provide a configuration file. This configuration file will define the components and their relationships. We will build the entire observability pipeline within this configuration file. +* **Alloy**: Open a browser and navigate to [http://localhost:12345/graph](http://localhost:12345/graph). You should see the Alloy UI. +* **Grafana**: Open a browser and navigate to [http://localhost:3000](http://localhost:3000). You should see the Grafana home page. +* **Loki**: Open a browser and navigate to [http://localhost:3100/metrics](http://localhost:3100/metrics). You should see the Loki metrics page. -### Open your code editor and locate the `otel-config.yaml` file + -The configuration file is written using **YAML** configuration syntax. To start, we will open the `otel-config.yaml` file in the code editor: + -**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +Since Grafana Alloy is configured to tail logs from all Docker containers, Loki should already be receiving logs. The best place to verify log collection is using the Grafana Logs Drilldown feature. To do this, navigate to [http://localhost:3000/a/grafana-lokiexplore-app](http://localhost:3000/a/grafana-lokiexplore-app). You should see the Grafana Logs Drilldown page. -1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. -2. Locate the `otel-config.yaml` file in the top level directory, `loki-fundamentals`. -3. Click on the `otel-config.yaml` file to open it in the code editor. +{{< figure max-width="100%" src="/media/docs/loki/get-started-drill-down.png" caption="Grafana Logs Drilldown" alt="Grafana Logs Drilldown" >}} - -1. Open the `loki-fundamentals` directory in a code editor of your choice. -1. Locate the `otel-config.yaml` file in the `loki-fundamentals` directory (Top level directory). -1. Click on the `otel-config.yaml` file to open it in the code editor. - +If you have only the getting started demo deployed in your docker environment, you should see three containers and their logs; `loki-fundamentals-alloy-1`, `loki-fundamentals-grafana-1` and `loki-fundamentals-loki-1`. Click **Show Logs** within the `loki-fundamentals-loki-1` container to drill down into the logs for that container. -You will copy all three of the following configuration snippets into the `otel-config.yaml` file. +{{< figure max-width="100%" src="/media/docs/loki/get-started-drill-down-container.png" caption="Grafana Drilldown Service View" alt="Grafana Drilldown Service View" >}} -### Receive OpenTelemetry logs via gRPC and HTTP +We will not cover the rest of the Grafana Logs Drilldown features in this quickstart guide. For more information on how to use the Grafana Logs Drilldown feature, see [the getting started page](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/logs/get-started/). -First, we will configure the OpenTelemetry receiver. 
`otlp:` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. + -Now add the following configuration to the `otel-config.yaml` file: + -```yaml -# Receivers -receivers: - otlp: - protocols: - grpc: - endpoint: 0.0.0.0:4317 - http: - endpoint: 0.0.0.0:4318 -``` +## Collect logs from a sample application -In this configuration: +Currently, the Loki stack is collecting logs about itself. To provide a more realistic example, you can deploy a sample application that generates logs. The sample application is called **The Carnivourous Greenhouse**, a microservices application that allows users to login and simulate a greenhouse with carnivorous plants to monitor. The application consists of seven services: +- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. +- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. +- **Simulation Service:** Generates sensor data for each plant. +- **WebSocket Service:** Manages the websocket connections for the application. +- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. +- **Main App:** The main application that ties all the services together. +- **Database:** A PostgreSQL database that stores user and plant data. -- `receivers`: The list of receivers to receive telemetry data. In this case, we are using the `otlp` receiver. -- `otlp`: The OpenTelemetry receiver that accepts logs in the OpenTelemetry format. -- `protocols`: The list of protocols that the receiver supports. In this case, we are using `grpc` and `http`. -- `grpc`: The gRPC protocol configuration. The receiver will accept logs via gRPC on `4317`. -- `http`: The HTTP protocol configuration. The receiver will accept logs via HTTP on `4318`. -- `endpoint`: The IP address and port number to listen on. In this case, we are listening on all IP addresses on port `4317` for gRPC and port `4318` for HTTP. +The architecture of the application is shown below: -For more information on the `otlp` receiver configuration, see the [OpenTelemetry Receiver OTLP documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md). +{{< figure max-width="100%" src="/media/docs/loki/get-started-architecture.png" caption="Sample Microservice Architecture" alt="Sample Microservice Architecture" >}} -### Create batches of logs using a OpenTelemetry processor +To deploy the sample application, follow these steps: -Next add the following configuration to the `otel-config.yaml` file: +1. With `loki-fundamentals` as the current working directory, deploy the sample application using Docker Compose: -```yaml -# Processors -processors: - batch: -``` + ```bash + docker compose -f greenhouse/docker-compose-micro.yml up -d --build + ``` + > **Note:** + > This may take a few minutes to complete since the images for the sample application need to be built. Go grab a coffee and come back. -In this configuration: + Once the command completes, you should see a similar output: -- `processors`: The list of processors to process telemetry data. In this case, we are using the `batch` processor. -- `batch`: The OpenTelemetry processor that accepts telemetry data from other `otelcol` components and places them into batches. 
+ ```console + ✔ bug_service Built 0.0s + ✔ main_app Built 0.0s + ✔ plant_service Built 0.0s + ✔ simulation_service Built 0.0s + ✔ user_service Built 0.0s + ✔ websocket_service Built 0.0s + ✔ Container greenhouse-websocket_service-1 Started 0.7s + ✔ Container greenhouse-db-1 Started 0.7s + ✔ Container greenhouse-user_service-1 Started 0.8s + ✔ Container greenhouse-bug_service-1 Started 0.8s + ✔ Container greenhouse-plant_service-1 Started 0.8s + ✔ Container greenhouse-simulation_service-1 Started 0.7s + ✔ Container greenhouse-main_app-1 Started 0.7s + ``` -For more information on the `batch` processor configuration, see the [OpenTelemetry Processor Batch documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md). +2. To verify the sample application is running, open a browser and navigate to [http://localhost:5005](http://localhost:5005). You should see the login page for the Carnivorous Greenhouse application. -### Export logs to Loki using a OpenTelemetry exporter + {{< figure max-width="100%" src="/media/docs/loki/get-started-login.png" caption="Greenhouse Home Page" alt="Greenhouse Home Page" >}} -We will use the `otlphttp/logs` exporter to send the logs to the Loki native OTLP endpoint. Add the following configuration to the `otel-config.yaml` file: + Now that the sample application is running, run some actions in the application to generate logs. Here is a list of actions: + 1. **Create a user:** Click **Sign Up** and create a new user. Add a username and password and click **Sign Up**. + 1. **Login:** Use the username and password you created to login. Add the username and password and click **Login**. + 1. **Create a plant:** Once logged in, give your plant a name, select a plant type and click **Add Plant**. Do this a few times if you like. -```yaml -# Exporters -exporters: - otlphttp/logs: - endpoint: "http://loki:3100/otlp" - tls: - insecure: true -``` + Your greenhouse should look something like this: -In this configuration: + {{< figure max-width="100%" src="/media/docs/loki/get-started-greenhouse.png" caption="Greenhouse Dashboard" alt="Greenhouse Dashboard" >}} -- `exporters`: The list of exporters to export telemetry data. In this case, we are using the `otlphttp/logs` exporter. -- `otlphttp/logs`: The OpenTelemetry exporter that accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. -- `endpoint`: The URL to send the telemetry data to. In this case, we are sending the logs to the Loki native OTLP endpoint at `http://loki:3100/otlp`. -- `tls`: The TLS configuration for the exporter. In this case, we are setting `insecure` to `true` to disable TLS verification. -- `insecure`: Disables TLS verification. This is set to `true` as we are using an insecure connection. - -For more information on the `otlphttp/logs` exporter configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/README.md) + Now that you have generated some logs, you can return to the Grafana Logs Drilldown page [http://localhost:3000/a/grafana-lokiexplore-app](http://localhost:3000/a/grafana-lokiexplore-app). You should see seven new services such as `greenhouse-main_app-1`, `greenhouse-plant_service-1`, `greenhouse-user_service-1`, etc. 
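If you would like to confirm from the terminal that all seven services came up before returning to Grafana, you can list them with Docker Compose (this assumes the same compose file used in the deploy step above):

```bash
# List the sample application containers and their state
docker compose -f greenhouse/docker-compose-micro.yml ps
```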
+ -### Creating the pipeline + -Now that we have configured the receiver, processor, and exporter, we need to create a pipeline to connect these components. Add the following configuration to the `otel-config.yaml` file: +## Querying logs -```yaml -# Pipelines -service: - pipelines: - logs: - receivers: [otlp] - processors: [batch] - exporters: [otlphttp/logs] -``` +At this point, you have viewed logs using the Grafana Logs Drilldown feature. In many cases this will provide you with all the information you need. However, we can also manually query Loki to ask more advanced questions about the logs. This can be done via **Grafana Explore**. -In this configuration: +1. Open a browser and navigate to [http://localhost:3000](http://localhost:3000) to open Grafana. -- `pipelines`: The list of pipelines to connect the receiver, processor, and exporter. In this case, we are using the `logs` pipeline but there is also pipelines for metrics, traces, and continuous profiling. -- `receivers`: The list of receivers to receive telemetry data. In this case, we are using the `otlp` receiver component we created earlier. -- `processors`: The list of processors to process telemetry data. In this case, we are using the `batch` processor component we created earlier. -- `exporters`: The list of exporters to export telemetry data. In this case, we are using the `otlphttp/logs` component exporter we created earlier. +1. From the Grafana main menu, click the **Explore** icon (1) to open the Explore tab. -### Load the configuration + To learn more about Explore, refer to the [Explore](https://grafana.com/docs/grafana/latest/explore/) documentation. -Before you load the configuration into the OpenTelemetry Collector, compare your configuration with the completed configuration below: + {{< figure src="/media/docs/loki/grafana-query-builder-v2.png" caption="Grafana Explore" alt="Grafana Explore" >}} -```yaml -# Receivers -receivers: - otlp: - protocols: - grpc: - endpoint: 0.0.0.0:4317 - http: - endpoint: 0.0.0.0:4318 - -# Processors -processors: - batch: - -# Exporters -exporters: - otlphttp/logs: - endpoint: "http://loki:3100/otlp" - tls: - insecure: true - -# Pipelines -service: - pipelines: - logs: - receivers: [otlp] - processors: [batch] - exporters: [otlphttp/logs] -``` +1. From the menu in the dashboard header, select the Loki data source (2). -Next, we need apply the configuration to the OpenTelemetry Collector. To do this, we will restart the OpenTelemetry Collector container: - -```bash -docker restart loki-fundamentals_otel-collector_1 -``` - + This displays the Loki query editor. -This will restart the OpenTelemetry Collector container with the new configuration. You can check the logs of the OpenTelemetry Collector container to see if the configuration was loaded successfully: - -```bash -docker logs loki-fundamentals_otel-collector_1 -``` + In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki//query/), to query your logs. + To learn more about the query editor, refer to the [query editor documentation](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/). -Within the logs, you should see the following message: +1. The Loki query editor has two modes (3): -```console -2024-08-02T13:10:25.136Z info service@v0.106.1/service.go:225 Everything is ready. Begin running and processing data. -``` + - [Builder mode](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/#builder-mode), which provides a visual query designer. 
+ - [Code mode](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/#code-mode), which provides a feature-rich editor for writing LogQL queries. -## Stuck? Need help? + Next we’ll walk through a few queries using the code view. -If you get stuck or need help creating the configuration, you can copy and replace the entire `otel-config.yaml` using the completed configuration file: +1. Click **Code** (3) to work in Code mode in the query editor. - -```bash -cp loki-fundamentals/completed/otel-config.yaml loki-fundamentals/otel-config.yaml -docker restart loki-fundamentals_otel-collector_1 -``` - + Here are some sample queries to get you started using LogQL. After copying any of these queries into the query editor, click **Run Query** (4) to execute the query. - + 1. View all the log lines which have the `container` label value `greenhouse-main_app-1`: + + ```bash + {container="greenhouse-main_app-1"} + ``` + + In Loki, this is a log stream. - + Loki uses [labels](https://grafana.com/docs/loki//get-started/labels/) as metadata to describe log streams. -## Step 3: Start the Carnivorous Greenhouse + Loki queries always start with a label selector. + In the previous query, the label selector is `{container="greenhouse-main_app-1"}`. -In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: - -> **Note:** -> This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first. - + + 2. Find all the log lines in the `{container="greenhouse-main_app-1"}` stream that contain the string `POST`: + + ```bash + {container="greenhouse-main_app-1"} |= "POST" + ``` + + -**Note: This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first.** + +### Extracting attributes from logs - +Loki by design does not force log lines into a specific schema format. Whether you are using JSON, key-value pairs, plain text, Logfmt, or any other format, Loki ingests these logs lines as a stream of characters. The sample application we are using stores logs in [Logfmt](https://brandur.org/logfmt) format: + ```bash -docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +ts=2025-02-21 16:09:42,176 level=INFO line=97 msg="192.168.65.1 - - [21/Feb/2025 16:09:42] "GET /static/style.css HTTP/1.1" 304 -" ``` - +To break this down: +- `ts=2025-02-21 16:09:42,176` is the timestamp of the log line. +- `level=INFO` is the log level. +- `line=97` is the line number in the code. +- `msg="192.168.65.1 - - [21/Feb/2025 16:09:42] "GET /static/style.css HTTP/1.1" 304 -"` is the log message. + + +When querying Loki, you can pipe the result of the label selector through a formatter. This extracts attributes from the log line for further processing. For example lets pipe `{container="greenhouse-main_app-1"}` through the `logfmt` formatter to extract the `level` and `line` attributes: + +```bash +{container="greenhouse-main_app-1"} | logfmt +``` + +When you now expand a log line in the query result, you will see the extracted attributes. +> **Tip:** +> **Before we move on** to the next section, let's generate some error logs. To do this, enable the bug service in the sample application. This is done by setting the `Toggle Error Mode` to `On` in the Carnivorous Greenhouse application. 
This will cause the bug service to randomly cause services to fail. + +### Advanced and Metrics Queries - +With Error Mode enabled the bug service will start causing services to fail, in these next few LogQL examples we will track down some of these errors. Lets start by parsing the logs to extract the `level` attribute and then filter for logs with a `level` of `ERROR`: + ```bash -docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +{container="greenhouse-plant_service-1"} | logfmt | level="ERROR" ``` - + +This query will return all the logs from the `greenhouse-plant_service-1` container that have a `level` attribute of `ERROR`. You can further refine this query by filtering for a specific code line: + +```bash +{container="greenhouse-plant_service-1"} | logfmt | level="ERROR", line="58" +``` + +This query will return all the logs from the `greenhouse-plant_service-1` container that have a `level` attribute of `ERROR` and a `line` attribute of `58`. +LogQL also supports metrics queries. Metrics are useful for abstracting the raw log data aggregating attributes into numeric values. This allows you to utilise more visualization options in Grafana as well as generate alerts on your logs. -This will start the following services: +For example, you can use a metric query to count the number of logs per second that have a specific attribute: + +```bash +sum(rate({container="greenhouse-plant_service-1"} | logfmt | level="ERROR" [$__auto])) +``` + -```console - ✔ Container greenhouse-db-1 Started - ✔ Container greenhouse-websocket_service-1 Started - ✔ Container greenhouse-bug_service-1 Started - ✔ Container greenhouse-user_service-1 Started - ✔ Container greenhouse-plant_service-1 Started - ✔ Container greenhouse-simulation_service-1 Started - ✔ Container greenhouse-main_app-1 Started +Another example is to get the top 10 services producing the highest rate of errors: + +```bash +topk(10,sum(rate({level="error"} | logfmt [5m])) by (service_name)) ``` + +> **Note:** +> `service_name` is a label created by Loki when no service name is provided in the log line. It will use the container name as the service name. A list of all labels can be found in [Labels](https://grafana.com/docs/loki/latest/get-started/labels/#default-labels-for-all-users). -Once started, you can access the Carnivorous Greenhouse application at [http://localhost:5005](http://localhost:5005). Generate some logs by interacting with the application in the following ways: +Finally, lets take a look at the total log throughput of each container in our production environment: + +```bash +sum by (service_name) (rate({env="production"} | logfmt [$__auto])) +``` + +This is made possible by the `service_name` label and the `env` label that we have added to our log lines. + + + + + +## A look under the hood + +At this point you will have a running Loki Stack and a sample application generating logs. You have also queried Loki using Grafana Logs Drilldown and Grafana Explore. +In this next section we will take a look under the hood to understand how the Loki stack has been configured to collect logs, the Loki configuration file, and how the Loki datasource has been configured in Grafana. + +### Grafana Alloy configuration + +Grafana Alloy is collecting logs from all the docker containers and forwarding them to Loki. +It needs a configuration file to know which logs to collect and where to forward them to. 
Within the `loki-fundamentals` directory, you will find a file called `config.alloy`: + +```alloy +// This component is responsible for disovering new containers within the docker environment +discovery.docker "getting_started" { + host = "unix:///var/run/docker.sock" + refresh_interval = "5s" +} + +// This component is responsible for relabeling the discovered containers +discovery.relabel "getting_started" { + targets = [] + + rule { + source_labels = ["__meta_docker_container_name"] + regex = "/(.*)" + target_label = "container" + } +} + +// This component is responsible for collecting logs from the discovered containers +loki.source.docker "getting_started" { + host = "unix:///var/run/docker.sock" + targets = discovery.docker.getting_started.targets + forward_to = [loki.process.getting_started.receiver] + relabel_rules = discovery.relabel.getting_started.rules + refresh_interval = "5s" +} + +// This component is responsible for processing the logs (In this case adding static labels) +loki.process "getting_started" { + stage.static_labels { + values = { + env = "production", + } +} + forward_to = [loki.write.getting_started.receiver] +} + +// This component is responsible for writing the logs to Loki +loki.write "getting_started" { + endpoint { + url = "http://loki:3100/loki/api/v1/push" + } +} + +// Enables the ability to view logs in the Alloy UI in realtime +livedebugging { + enabled = true +} +``` +This configuration file can be viewed visually via the Alloy UI at [http://localhost:12345/graph](http://localhost:12345/graph). -1. Create a user. -1. Log in. -1. Create a few plants to monitor. -1. Enable bug mode to activate the bug service. This will cause services to fail and generate additional logs. +{{< figure max-width="100%" src="/media/docs/loki/getting-started-alloy-ui.png" caption="Alloy UI" alt="Alloy UI" >}} -Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). +In this view you can see the components of the Alloy configuration file and how they are connected: +* **discovery.docker**: This component queries the metadata of the docker enviroment via the docker socket and discovers new containers, aswell as providing metdata about the containers. +* **discovery.relabel**: This component converts a metadata (`__meta_docker_container_name`) label into a Loki label (`container`). +* **loki.source.docker**: This component collects logs from the discovered containers and forwards them to the next component. It requests the metadata from the `discovery.docker` component and applies the relabeling rules from the `discovery.relabel` component. +* **loki.process**: This component provides stages for log transformation and extraction. In this case it adds a static label `env=production` to all logs. +* **loki.write**: This component writes the logs to Loki. It forwards the logs to the Loki endpoint `http://loki:3100/loki/api/v1/push`. - +### View Logs in realtime - +Grafana Alloy provides inbuilt realtime log viewer. This allows you to view current log entries and how they are being transformed via specific components of the pipeline. +To view live debugging mode open a browser tab and navigate to: [http://localhost:12345/debug/loki.process.getting_started](http://localhost:12345/debug/loki.process.getting_started). 
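To double-check from the command line that the static `env=production` label added by the `loki.process` component has actually reached Loki, you can ask Loki's label API for the values it has seen. This is only a quick sanity check and assumes the default port mapping used in this stack:

```bash
# List the values Loki has stored for the "env" label
curl -s http://localhost:3100/loki/api/v1/label/env/values
```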
+ + -## Summary +## Loki Configuration -In this example, we configured the OpenTelemetry Collector to receive logs from an example application and send them to Loki using the native OTLP endpoint. Make sure to also consult the Loki configuration file `loki-config.yaml` to understand how we have configured Loki to receive logs from the OpenTelemetry Collector. +Grafana Loki requires a configuration file to define how it should run. Within the `loki-fundamentals` directory, you will find a file called `loki-config.yaml`: +```yaml +auth_enabled: false + +server: + http_listen_port: 3100 + grpc_listen_port: 9096 + log_level: info + grpc_server_max_concurrent_streams: 1000 + +common: + instance_addr: 127.0.0.1 + path_prefix: /tmp/loki + storage: + filesystem: + chunks_directory: /tmp/loki/chunks + rules_directory: /tmp/loki/rules + replication_factor: 1 + ring: + kvstore: + store: inmemory + +query_range: + results_cache: + cache: + embedded_cache: + enabled: true + max_size_mb: 100 + +limits_config: + metric_aggregation_enabled: true + allow_structured_metadata: true + volume_enabled: true + retention_period: 24h # 24h + +schema_config: + configs: + - from: 2020-10-24 + store: tsdb + object_store: filesystem + schema: v13 + index: + prefix: index_ + period: 24h + +pattern_ingester: + enabled: true + metric_aggregation: + loki_address: localhost:3100 + +ruler: + enable_alertmanager_discovery: true + enable_api: true + +frontend: + encoding: protobuf -### Back to docs +compactor: + working_directory: /tmp/loki/retention + delete_request_store: filesystem + retention_enabled: true +``` +To summarize the configuration file: +* **auth_enabled**: This is set to false, meaning Loki does not need a [tenant ID](https://grafana.com/docs/loki//operations/multi-tenancy/) for ingest or query. +* **server**: Defines the ports Loki listens on, the log level, and the maximum number of concurrent gRPC streams. +* **common**: Defines the common configuration for Loki. This includes the instance address, storage configuration, replication factor, and ring configuration. +* **query_range**: This is defined to tell Loki to use inbuilt caching for query results. In production environments of Loki this is handled by a seperate cache service such as memcached. +* **limits_config**: Defines the global limits for all Loki tenants. This includes enabling specific features such as metric aggregation and structured metadata. Limits can be defined on a per tenant basis, however this is considered an advanced configuration and for most use cases the global limits are sufficient. +* **schema_config**: Defines the schema configuration for Loki. This includes the schema version, the object store, and the index configuration. +* **pattern_ingester**: Enables pattern ingesters which are used to discover log patterns. Mostly used by Grafana Logs Drilldown. +* **ruler**: Enables the ruler component of Loki. This is used to create alerts based on log queries. +* **frontend**: Defines the encoding format for the frontend. In this case it is set to `protobuf`. +* **compactor**: Defines the compactor configuration. Used to compact the index and mange chunk retention. + +The above configuration file is a basic configuration file for Loki. For more advanced configuration options, refer to the [Loki Configuration](https://grafana.com/docs/loki//configuration/) documentation. + + + + + +### Grafana Loki Datasource + +The final piece of the puzzle is the Grafana Loki datasource. 
This is used by Grafana to connect to Loki and query the logs. Grafana has multiple ways to define a datasource; +* **Direct**: This is where you define the datasource in the Grafana UI. +* **Provisioning**: This is where you define the datasource in a configuration file and have Grafana automatically create the datasource. +* **API**: This is where you use the Grafana API to create the datasource. + +In this case we are using the provisioning method. Instead of mounting the Grafana configuration directory, we have defined the datasource in the `docker-compose.yml` file: + +```yaml + grafana: + image: grafana/grafana:latest + environment: + - GF_FEATURE_TOGGLES_ENABLE=grafanaManagedRecordingRules + - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin + - GF_AUTH_ANONYMOUS_ENABLED=true + - GF_AUTH_BASIC_ENABLED=false + ports: + - 3000:3000/tcp + entrypoint: + - sh + - -euc + - | + mkdir -p /etc/grafana/provisioning/datasources + cat < /etc/grafana/provisioning/datasources/ds.yaml + apiVersion: 1 + datasources: + - name: Loki + type: loki + access: proxy + orgId: 1 + url: 'http://loki:3100' + basicAuth: false + isDefault: true + version: 1 + editable: true + EOF + /run.sh + networks: + - loki +``` +Within the entrypoint section of the `docker-compose.yml` file, we have defined a file called `run.sh` this runs on startup and creates the datasource configuration file `ds.yaml` in the Grafana provisioning directory. +This file defines the Loki datasource and tells Grafana to use it. Since Loki is running in the same Docker network as Grafana, we can use the service name `loki` as the URL. -Head back to where you started from to continue with the [Loki documentation](https://grafana.com/docs/loki/latest/send-data/otel). + + -## Further reading +## What next? -For more information on the OpenTelemetry Collector and the native OTLP endpoint of Loki, refer to the following resources: +### Back to docs +Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/get-started/quick-start/). -- [Loki OTLP endpoint](https://grafana.com/docs/loki//send-data/otel/) -- [How is native OTLP endpoint different from Loki Exporter](https://grafana.com/docs/loki//send-data/otel/native_otlp_vs_loki_exporter) -- [OpenTelemetry Collector Configuration](https://opentelemetry.io/docs/collector/configuration/) +You have completed the Loki Quickstart demo. So where to go next? Here are a few suggestions: +* **Deploy:** Loki can be deployed in multiple ways. For production usecases we recommend deploying Loki via the [Helm chart](https://grafana.com/docs/loki//setup/install/helm/). +* **Send Logs:** In this example we used Grafana Alloy to collect and send logs to Loki. However there are many other methods you can use depending upon your needs. For more information see [send data](https://grafana.com/docs/loki/next/send-data/). +* **Query Logs:** LogQL is an extensive query language for logs and contains many tools to improve log retrival and generate insights. For more information see the [Query section](https://grafana.com/docs/loki//query/). +* **Alert:** Lastly you can use the ruler component of Loki to create alerts based on log queries. For more information see [Alerting](https://grafana.com/docs/loki//alert/). 
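As a small illustration of the **Send Logs** point above, any process that can make an HTTP request can push log lines straight to Loki's push API. The example below is only a sketch against the local stack from this guide; the `service_name` label and message are arbitrary, and `date +%s%N` assumes a GNU/Linux shell:

```bash
# Push a single log line to the local Loki instance
curl -s -X POST http://localhost:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  -d '{"streams":[{"stream":{"service_name":"curl-test"},"values":[["'"$(date +%s%N)"'","hello from curl"]]}]}'
```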
-## Complete metrics, logs, traces, and profiling example +### Complete metrics, logs, traces, and profiling example -If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). `Intro-to-mltp` provides a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. +If you would like to run a demonstration environment that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). +It's a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. -The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud. +The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. +You can also push the data from the environment to [Grafana Cloud](https://grafana.com/cloud/). - - \ No newline at end of file + \ No newline at end of file diff --git a/workshops/course-tracker-test/setup.sh b/workshops/course-tracker-test/setup.sh new file mode 100644 index 0000000..2e76a32 --- /dev/null +++ b/workshops/course-tracker-test/setup.sh @@ -0,0 +1,76 @@ +#!/bin/bash + +# Define variables +COURSE="course-tracker-test" +VM_UUID=$(cat /sys/class/dmi/id/product_uuid) +BIN_DIR="/usr/local/bin" +SERVICE_NAME="course-monitor" +BINARY_NAME="alloy-linux-amd64" +CONFIG_NAME="config.alloy" +DOWNLOAD_URL="https://github.com/grafana/alloy/releases/download/v1.6.1/alloy-linux-amd64.zip" +CONFIG_URL="https://raw.githubusercontent.com/grafana/killercoda/refs/heads/staging/tools/course-tracker/config.alloy" +SERVICE_FILE="/etc/systemd/system/${SERVICE_NAME}.service" +CUSTOM_ARGS="--server.http.listen-addr=0.0.0.0:12346" + +set -euf +# shellcheck disable=SC3040 +(set -o pipefail 2> /dev/null) && set -o pipefail && sudo install -m 0755 -d /etc/apt/keyrings && \ +sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc && \ +sudo chmod a+r /etc/apt/keyrings/docker.asc && \ +ARCH="$(dpkg --print-architecture)" && \ +VERSION_CODENAME="$(source /etc/os-release && echo "${VERSION_CODENAME}")" && \ +readonly ARCH VERSION_CODENAME && \ +printf 'deb [arch=%s signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu %s stable' "${ARCH}" "${VERSION_CODENAME}" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null && \ +sudo apt-get update && \ +sudo apt-get install -y docker-compose-plugin && \ + +# Create a temporary directory for the download +TMP_DIR=$(mktemp -d) && \ +cd "$TMP_DIR" || exit 1 && \ + +# Download and unzip Alloy +echo "Downloading Alloy..." && \ +wget -q "$DOWNLOAD_URL" -O alloy.zip && \ +unzip -q alloy.zip && \ + +# Move the binary to /usr/local/bin and make it executable +echo "Installing Alloy..." && \ +sudo mv "$BINARY_NAME" "$BIN_DIR/$BINARY_NAME" && \ +sudo chmod +x "$BIN_DIR/$BINARY_NAME" && \ + +# Download the configuration file +echo "Downloading configuration..." && \ +sudo wget -q "$CONFIG_URL" -O "/etc/$CONFIG_NAME" && \ + +# Create the systemd service +echo "Creating systemd service..." 
&& \ +sudo bash -c "cat < $SERVICE_FILE +[Unit] +Description=Course Monitor Service +After=network.target + +[Service] +ExecStart=$BIN_DIR/$BINARY_NAME $CUSTOM_ARGS run /etc/$CONFIG_NAME +Restart=always +User=root +WorkingDirectory=$BIN_DIR +StandardOutput=journal +StandardError=journal +LimitNOFILE=65536 +Environment=VM_UUID=$VM_UUID +Environment=COURSE=$COURSE + +[Install] +WantedBy=multi-user.target +EOF" && \ + + +# Reload systemd, enable and start the service +echo "Enabling and starting the service..." && \ +sudo systemctl daemon-reload && \ +sudo systemctl enable "$SERVICE_NAME" && \ +sudo systemctl start "$SERVICE_NAME" && \ +export PROMPT_COMMAND='history -a' && \ + +echo "Service $SERVICE_NAME has been installed and started successfully." && cd /root && \ +clear && echo "Setup complete. You may now begin the tutorial." diff --git a/workshops/course-tracker-test/step1.md b/workshops/course-tracker-test/step1.md index 80f1742..4e09298 100644 --- a/workshops/course-tracker-test/step1.md +++ b/workshops/course-tracker-test/step1.md @@ -1,23 +1,37 @@ -# Step 1: Environment setup +# Deploy the Loki stack -In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. +**To deploy the Loki stack locally, follow these steps:** -1. To get started, clone the repository that contains our demo application: +1. Clone the Loki fundamentals repository and checkout the getting-started branch: ```bash - git clone -b microservice-otel-collector https://github.com/grafana/loki-fundamentals.git + git clone https://github.com/grafana/loki-fundamentals.git -b getting-started ```{{exec}} -1. Next we will spin up our observability stack using Docker Compose: +1. Change to the `loki-fundamentals`{{copy}} directory: ```bash - docker-compose -f loki-fundamentals/docker-compose.yml up -d + cd loki-fundamentals ```{{exec}} - To check the status of services we can run the following command: +1. With `loki-fundamentals`{{copy}} as the current working directory deploy Loki, Alloy, and Grafana using Docker Compose: ```bash - docker ps -a + docker compose up -d ```{{exec}} -After we’ve finished configuring the OpenTelemetry Collector and sending logs to Loki, we will be able to view the logs in Grafana. To check if Grafana is up and running, navigate to the following URL: [http://localhost:3000]({{TRAFFIC_HOST1_3000}}) + After running the command, you should see a similar output: + + ```console + ✔ Container loki-fundamentals-grafana-1 Started 0.3s + ✔ Container loki-fundamentals-loki-1 Started 0.3s + ✔ Container loki-fundamentals-alloy-1 Started 0.4s + ```{{copy}} + +With the Loki stack running, you can now verify each component is up and running: + +- **Alloy**: Open a browser and navigate to [http://localhost:12345/graph]({{TRAFFIC_HOST1_12345}}/graph). You should see the Alloy UI. + +- **Grafana**: Open a browser and navigate to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}). You should see the Grafana home page. + +- **Loki**: Open a browser and navigate to [http://localhost:3100/metrics]({{TRAFFIC_HOST1_3100}}/metrics). You should see the Loki metrics page. 
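+
+If you prefer the terminal, you can also confirm that Loki has finished starting up. This is an optional extra check, not part of the original steps; Loki's readiness endpoint returns `ready`{{copy}} once the server can accept traffic:
+
+```bash
+curl http://localhost:3100/ready
+```{{exec}}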
diff --git a/workshops/course-tracker-test/step2.md b/workshops/course-tracker-test/step2.md index afa0071..a36571c 100644 --- a/workshops/course-tracker-test/step2.md +++ b/workshops/course-tracker-test/step2.md @@ -1,180 +1,9 @@ -# Step 2: Configuring the OpenTelemetry Collector +Since Grafana Alloy is configured to tail logs from all Docker containers, Loki should already be receiving logs. The best place to verify log collection is using the Grafana Logs Drilldown feature. To do this, navigate to [http://localhost:3000/a/grafana-lokiexplore-app]({{TRAFFIC_HOST1_3000}}/a/grafana-lokiexplore-app). You should see the Grafana Logs Drilldown page. -To configure the Collector to ingest OpenTelemetry logs from our application, we need to provide a configuration file. This configuration file will define the components and their relationships. We will build the entire observability pipeline within this configuration file. +![Grafana Logs Drilldown](https://grafana.com/media/docs/loki/get-started-drill-down.png) -## Open your code editor and locate the `otel-config.yaml`{{copy}} file +If you have only the getting started demo deployed in your docker environment, you should see three containers and their logs; `loki-fundamentals-alloy-1`{{copy}}, `loki-fundamentals-grafana-1`{{copy}} and `loki-fundamentals-loki-1`{{copy}}. Click **Show Logs** within the `loki-fundamentals-loki-1`{{copy}} container to drill down into the logs for that container. -The configuration file is written using **YAML** configuration syntax. To start, we will open the `otel-config.yaml`{{copy}} file in the code editor: +![Grafana Drilldown Service View](https://grafana.com/media/docs/loki/get-started-drill-down-container.png) -**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor`{{copy}} tab.** - -1. Expand the `loki-fundamentals`{{copy}} directory in the file explorer of the `Editor`{{copy}} tab. - -1. Locate the `otel-config.yaml`{{copy}} file in the top level directory, `loki-fundamentals`{{copy}}. - -1. Click on the `otel-config.yaml`{{copy}} file to open it in the code editor. - -You will copy all three of the following configuration snippets into the `otel-config.yaml`{{copy}} file. - -## Receive OpenTelemetry logs via gRPC and HTTP - -First, we will configure the OpenTelemetry receiver. `otlp:`{{copy}} accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. - -Now add the following configuration to the `otel-config.yaml`{{copy}} file: - -```yaml -# Receivers -receivers: - otlp: - protocols: - grpc: - endpoint: 0.0.0.0:4317 - http: - endpoint: 0.0.0.0:4318 -```{{copy}} - -In this configuration: - -- `receivers`{{copy}}: The list of receivers to receive telemetry data. In this case, we are using the `otlp`{{copy}} receiver. - -- `otlp`{{copy}}: The OpenTelemetry receiver that accepts logs in the OpenTelemetry format. - -- `protocols`{{copy}}: The list of protocols that the receiver supports. In this case, we are using `grpc`{{copy}} and `http`{{copy}}. - -- `grpc`{{copy}}: The gRPC protocol configuration. The receiver will accept logs via gRPC on `4317`{{copy}}. - -- `http`{{copy}}: The HTTP protocol configuration. The receiver will accept logs via HTTP on `4318`{{copy}}. - -- `endpoint`{{copy}}: The IP address and port number to listen on. In this case, we are listening on all IP addresses on port `4317`{{copy}} for gRPC and port `4318`{{copy}} for HTTP. 
- -For more information on the `otlp`{{copy}} receiver configuration, see the [OpenTelemetry Receiver OTLP documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md). - -## Create batches of logs using a OpenTelemetry processor - -Next add the following configuration to the `otel-config.yaml`{{copy}} file: - -```yaml -# Processors -processors: - batch: -```{{copy}} - -In this configuration: - -- `processors`{{copy}}: The list of processors to process telemetry data. In this case, we are using the `batch`{{copy}} processor. - -- `batch`{{copy}}: The OpenTelemetry processor that accepts telemetry data from other `otelcol`{{copy}} components and places them into batches. - -For more information on the `batch`{{copy}} processor configuration, see the [OpenTelemetry Processor Batch documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md). - -## Export logs to Loki using a OpenTelemetry exporter - -We will use the `otlphttp/logs`{{copy}} exporter to send the logs to the Loki native OTLP endpoint. Add the following configuration to the `otel-config.yaml`{{copy}} file: - -```yaml -# Exporters -exporters: - otlphttp/logs: - endpoint: "http://loki:3100/otlp" - tls: - insecure: true -```{{copy}} - -In this configuration: - -- `exporters`{{copy}}: The list of exporters to export telemetry data. In this case, we are using the `otlphttp/logs`{{copy}} exporter. - -- `otlphttp/logs`{{copy}}: The OpenTelemetry exporter that accepts telemetry data from other `otelcol`{{copy}} components and writes them over the network using the OTLP HTTP protocol. - -- `endpoint`{{copy}}: The URL to send the telemetry data to. In this case, we are sending the logs to the Loki native OTLP endpoint at `http://loki:3100/otlp`{{copy}}. - -- `tls`{{copy}}: The TLS configuration for the exporter. In this case, we are setting `insecure`{{copy}} to `true`{{copy}} to disable TLS verification. - -- `insecure`{{copy}}: Disables TLS verification. This is set to `true`{{copy}} as we are using an insecure connection. - -For more information on the `otlphttp/logs`{{copy}} exporter configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/README.md) - -## Creating the pipeline - -Now that we have configured the receiver, processor, and exporter, we need to create a pipeline to connect these components. Add the following configuration to the `otel-config.yaml`{{copy}} file: - -```yaml -# Pipelines -service: - pipelines: - logs: - receivers: [otlp] - processors: [batch] - exporters: [otlphttp/logs] -```{{copy}} - -In this configuration: - -- `pipelines`{{copy}}: The list of pipelines to connect the receiver, processor, and exporter. In this case, we are using the `logs`{{copy}} pipeline but there is also pipelines for metrics, traces, and continuous profiling. - -- `receivers`{{copy}}: The list of receivers to receive telemetry data. In this case, we are using the `otlp`{{copy}} receiver component we created earlier. - -- `processors`{{copy}}: The list of processors to process telemetry data. In this case, we are using the `batch`{{copy}} processor component we created earlier. - -- `exporters`{{copy}}: The list of exporters to export telemetry data. In this case, we are using the `otlphttp/logs`{{copy}} component exporter we created earlier. 
- -## Load the configuration - -Before you load the configuration into the OpenTelemetry Collector, compare your configuration with the completed configuration below: - -```yaml -# Receivers -receivers: - otlp: - protocols: - grpc: - endpoint: 0.0.0.0:4317 - http: - endpoint: 0.0.0.0:4318 - -# Processors -processors: - batch: - -# Exporters -exporters: - otlphttp/logs: - endpoint: "http://loki:3100/otlp" - tls: - insecure: true - -# Pipelines -service: - pipelines: - logs: - receivers: [otlp] - processors: [batch] - exporters: [otlphttp/logs] -```{{copy}} - -Next, we need apply the configuration to the OpenTelemetry Collector. To do this, we will restart the OpenTelemetry Collector container: - -```bash -docker restart loki-fundamentals_otel-collector_1 -```{{exec}} - -This will restart the OpenTelemetry Collector container with the new configuration. You can check the logs of the OpenTelemetry Collector container to see if the configuration was loaded successfully: - -```bash -docker logs loki-fundamentals_otel-collector_1 -```{{exec}} - -Within the logs, you should see the following message: - -```console -2024-08-02T13:10:25.136Z info service@v0.106.1/service.go:225 Everything is ready. Begin running and processing data. -```{{exec}} - -# Stuck? Need help? - -If you get stuck or need help creating the configuration, you can copy and replace the entire `otel-config.yaml`{{copy}} using the completed configuration file: - -```bash -cp loki-fundamentals/completed/otel-config.yaml loki-fundamentals/otel-config.yaml -docker restart loki-fundamentals_otel-collector_1 -```{{exec}} +We will not cover the rest of the Grafana Logs Drilldown features in this quickstart guide. For more information on how to use the Grafana Logs Drilldown feature, see [the getting started page](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/logs/get-started/). diff --git a/workshops/course-tracker-test/step3.md b/workshops/course-tracker-test/step3.md index 2c14635..8a509d7 100644 --- a/workshops/course-tracker-test/step3.md +++ b/workshops/course-tracker-test/step3.md @@ -1,33 +1,68 @@ -# Step 3: Start the Carnivorous Greenhouse +# Collect logs from a sample application -In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: +Currently, the Loki stack is collecting logs about itself. To provide a more realistic example, you can deploy a sample application that generates logs. The sample application is called **The Carnivourous Greenhouse**, a microservices application that allows users to login and simulate a greenhouse with carnivorous plants to monitor. The application consists of seven services: -**Note: This docker-compose file relies on the `loki-fundamentals_loki`{{copy}} docker network. If you have not started the observability stack, you will need to start it first.** +- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. -```bash -docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build -```{{exec}} +- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. -This will start the following services: +- **Simulation Service:** Generates sensor data for each plant. 
-```console - ✔ Container greenhouse-db-1 Started - ✔ Container greenhouse-websocket_service-1 Started - ✔ Container greenhouse-bug_service-1 Started - ✔ Container greenhouse-user_service-1 Started - ✔ Container greenhouse-plant_service-1 Started - ✔ Container greenhouse-simulation_service-1 Started - ✔ Container greenhouse-main_app-1 Started -```{{copy}} +- **WebSocket Service:** Manages the websocket connections for the application. -Once started, you can access the Carnivorous Greenhouse application at [http://localhost:5005]({{TRAFFIC_HOST1_5005}}). Generate some logs by interacting with the application in the following ways: +- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. -1. Create a user. +- **Main App:** The main application that ties all the services together. -1. Log in. +- **Database:** A PostgreSQL database that stores user and plant data. -1. Create a few plants to monitor. +The architecture of the application is shown below: -1. Enable bug mode to activate the bug service. This will cause services to fail and generate additional logs. +![Sample Microservice Architecture](https://grafana.com/media/docs/loki/get-started-architecture.png) -Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore]({{TRAFFIC_HOST1_3000}}/a/grafana-lokiexplore-app/explore). +To deploy the sample application, follow these steps: + +1. With `loki-fundamentals`{{copy}} as the current working directory, deploy the sample application using Docker Compose: + + ```bash + docker compose -f greenhouse/docker-compose-micro.yml up -d --build + ```{{exec}} + + > **Note:** + > This may take a few minutes to complete since the images for the sample application need to be built. Go grab a coffee and come back. + + Once the command completes, you should see a similar output: + + ```console + ✔ bug_service Built 0.0s + ✔ main_app Built 0.0s + ✔ plant_service Built 0.0s + ✔ simulation_service Built 0.0s + ✔ user_service Built 0.0s + ✔ websocket_service Built 0.0s + ✔ Container greenhouse-websocket_service-1 Started 0.7s + ✔ Container greenhouse-db-1 Started 0.7s + ✔ Container greenhouse-user_service-1 Started 0.8s + ✔ Container greenhouse-bug_service-1 Started 0.8s + ✔ Container greenhouse-plant_service-1 Started 0.8s + ✔ Container greenhouse-simulation_service-1 Started 0.7s + ✔ Container greenhouse-main_app-1 Started 0.7s + ```{{copy}} + +1. To verify the sample application is running, open a browser and navigate to [http://localhost:5005]({{TRAFFIC_HOST1_5005}}). You should see the login page for the Carnivorous Greenhouse application. + + ![Greenhouse Home Page](https://grafana.com/media/docs/loki/get-started-login.png) + + Now that the sample application is running, run some actions in the application to generate logs. Here is a list of actions: + + 1. **Create a user:** Click **Sign Up** and create a new user. Add a username and password and click **Sign Up**. + + 1. **Login:** Use the username and password you created to login. Add the username and password and click **Login**. + + 1. **Create a plant:** Once logged in, give your plant a name, select a plant type and click **Add Plant**. Do this a few times if you like. 
+ +Your greenhouse should look something like this: + +![Greenhouse Dashboard](https://grafana.com/media/docs/loki/get-started-greenhouse.png) + +Now that you have generated some logs, you can return to the Grafana Logs Drilldown page [http://localhost:3000/a/grafana-lokiexplore-app]({{TRAFFIC_HOST1_3000}}/a/grafana-lokiexplore-app). You should see seven new services such as `greenhouse-main_app-1`{{copy}}, `greenhouse-plant_service-1`{{copy}}, `greenhouse-user_service-1`{{copy}}, etc. diff --git a/workshops/course-tracker-test/step4.md b/workshops/course-tracker-test/step4.md new file mode 100644 index 0000000..bb6f64a --- /dev/null +++ b/workshops/course-tracker-test/step4.md @@ -0,0 +1,49 @@ +# Querying logs + +At this point, you have viewed logs using the Grafana Logs Drilldown feature. In many cases this will provide you with all the information you need. However, we can also manually query Loki to ask more advanced questions about the logs. This can be done via **Grafana Explore**. + +1. Open a browser and navigate to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}) to open Grafana. + +1. From the Grafana main menu, click the **Explore** icon (1) to open the Explore tab. + + To learn more about Explore, refer to the [Explore](https://grafana.com/docs/grafana/latest/explore/) documentation. + + ![Grafana Explore](https://grafana.com/media/docs/loki/grafana-query-builder-v2.png) + +1. From the menu in the dashboard header, select the Loki data source (2). + + This displays the Loki query editor. + + In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/latest/query/), to query your logs. + To learn more about the query editor, refer to the [query editor documentation](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/). + +1. The Loki query editor has two modes (3): + + - [Builder mode](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/#builder-mode), which provides a visual query designer. + + - [Code mode](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/#code-mode), which provides a feature-rich editor for writing LogQL queries. + + Next we’ll walk through a few queries using the code view. + +1. Click **Code** (3) to work in Code mode in the query editor. + + Here are some sample queries to get you started using LogQL. After copying any of these queries into the query editor, click **Run Query** (4) to execute the query. + + 1. View all the log lines which have the `container`{{copy}} label value `greenhouse-main_app-1`{{copy}}: + + ```bash + {container="greenhouse-main_app-1"} + ```{{copy}} + + In Loki, this is a log stream. + + Loki uses [labels](https://grafana.com/docs/loki/latest/get-started/labels/) as metadata to describe log streams. + + Loki queries always start with a label selector. + In the previous query, the label selector is `{container="greenhouse-main_app-1"}`{{copy}}. + + 1. Find all the log lines in the `{container="greenhouse-main_app-1"}`{{copy}} stream that contain the string `POST`{{copy}}: + + ```bash + {container="greenhouse-main_app-1"} |= "POST" + ```{{copy}} diff --git a/workshops/course-tracker-test/step5.md b/workshops/course-tracker-test/step5.md new file mode 100644 index 0000000..fd221e9 --- /dev/null +++ b/workshops/course-tracker-test/step5.md @@ -0,0 +1,69 @@ +# Extracting attributes from logs + +Loki by design does not force log lines into a specific schema format. 
Whether you are using JSON, key-value pairs, plain text, Logfmt, or any other format, Loki ingests these log lines as a stream of characters. The sample application we are using stores logs in [Logfmt](https://brandur.org/logfmt) format:
+
+```bash
+ts=2025-02-21 16:09:42,176 level=INFO line=97 msg="192.168.65.1 - - [21/Feb/2025 16:09:42] "GET /static/style.css HTTP/1.1" 304 -"
+```{{copy}}
+
+To break this down:
+
+- `ts=2025-02-21 16:09:42,176`{{copy}} is the timestamp of the log line.
+
+- `level=INFO`{{copy}} is the log level.
+
+- `line=97`{{copy}} is the line number in the code.
+
+- `msg="192.168.65.1 - - [21/Feb/2025 16:09:42] "GET /static/style.css HTTP/1.1" 304 -"`{{copy}} is the log message.
+
+When querying Loki, you can pipe the result of the label selector through a formatter. This extracts attributes from the log line for further processing. For example, let's pipe `{container="greenhouse-main_app-1"}`{{copy}} through the `logfmt`{{copy}} formatter to extract the `level`{{copy}} and `line`{{copy}} attributes:
+
+```bash
+{container="greenhouse-main_app-1"} | logfmt
+```{{copy}}
+
+When you now expand a log line in the query result, you will see the extracted attributes.
+
+> **Tip:**
+> **Before we move on** to the next section, let’s generate some error logs. To do this, enable the bug service in the sample application. This is done by setting the `Toggle Error Mode`{{copy}} to `On`{{copy}} in the Carnivorous Greenhouse application. This will cause the bug service to randomly cause services to fail.
+
+# Advanced and Metrics Queries
+
+With Error Mode enabled, the bug service will start causing services to fail. In the next few LogQL examples we will track down some of these errors. Let's start by parsing the logs to extract the `level`{{copy}} attribute and then filtering for logs with a `level`{{copy}} of `ERROR`{{copy}}:
+
+```bash
+{container="greenhouse-plant_service-1"} | logfmt | level="ERROR"
+```{{copy}}
+
+This query will return all the logs from the `greenhouse-plant_service-1`{{copy}} container that have a `level`{{copy}} attribute of `ERROR`{{copy}}. You can further refine this query by filtering for a specific code line:
+
+```bash
+{container="greenhouse-plant_service-1"} | logfmt | level="ERROR", line="58"
+```{{copy}}
+
+This query will return all the logs from the `greenhouse-plant_service-1`{{copy}} container that have a `level`{{copy}} attribute of `ERROR`{{copy}} and a `line`{{copy}} attribute of `58`{{copy}}.
+
+LogQL also supports metrics queries. Metrics are useful for abstracting the raw log data by aggregating attributes into numeric values. This allows you to use more visualization options in Grafana as well as generate alerts on your logs.
+
+For example, you can use a metric query to count the number of logs per second that have a specific attribute:
+
+```bash
+sum(rate({container="greenhouse-plant_service-1"} | logfmt | level="ERROR" [$__auto]))
+```{{copy}}
+
+Another example is to get the top 10 services producing the highest rate of errors:
+
+```bash
+topk(10,sum(rate({level="error"} | logfmt [5m])) by (service_name))
+```{{copy}}
+
+> **Note:**
+> `service_name`{{copy}} is a label created by Loki when no service name is provided in the log line. It will use the container name as the service name. A list of all labels can be found in [Labels](https://grafana.com/docs/loki/latest/get-started/labels/#default-labels-for-all-users).
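+
+As an extra illustration (not part of the original walkthrough), you can also count error lines per container instead of computing a rate; the `env`{{copy}} label and the `level`{{copy}} field follow the examples above:
+
+```bash
+sum by (container) (count_over_time({env="production"} | logfmt | level="ERROR" [$__auto]))
+```{{copy}}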
+ +Finally, lets take a look at the total log throughput of each container in our production environment: + +```bash +sum by (service_name) (rate({env="production"} | logfmt [$__auto])) +```{{copy}} + +This is made possible by the `service_name`{{copy}} label and the `env`{{copy}} label that we have added to our log lines. diff --git a/workshops/course-tracker-test/step6.md b/workshops/course-tracker-test/step6.md new file mode 100644 index 0000000..05258c4 --- /dev/null +++ b/workshops/course-tracker-test/step6.md @@ -0,0 +1,80 @@ +# A look under the hood + +At this point you will have a running Loki Stack and a sample application generating logs. You have also queried Loki using Grafana Logs Drilldown and Grafana Explore. +In this next section we will take a look under the hood to understand how the Loki stack has been configured to collect logs, the Loki configuration file, and how the Loki datasource has been configured in Grafana. + +## Grafana Alloy configuration + +Grafana Alloy is collecting logs from all the docker containers and forwarding them to Loki. +It needs a configuration file to know which logs to collect and where to forward them to. Within the `loki-fundamentals`{{copy}} directory, you will find a file called `config.alloy`{{copy}}: + +```alloy +// This component is responsible for disovering new containers within the docker environment +discovery.docker "getting_started" { + host = "unix:///var/run/docker.sock" + refresh_interval = "5s" +} + +// This component is responsible for relabeling the discovered containers +discovery.relabel "getting_started" { + targets = [] + + rule { + source_labels = ["__meta_docker_container_name"] + regex = "/(.*)" + target_label = "container" + } +} + +// This component is responsible for collecting logs from the discovered containers +loki.source.docker "getting_started" { + host = "unix:///var/run/docker.sock" + targets = discovery.docker.getting_started.targets + forward_to = [loki.process.getting_started.receiver] + relabel_rules = discovery.relabel.getting_started.rules + refresh_interval = "5s" +} + +// This component is responsible for processing the logs (In this case adding static labels) +loki.process "getting_started" { + stage.static_labels { + values = { + env = "production", + } +} + forward_to = [loki.write.getting_started.receiver] +} + +// This component is responsible for writing the logs to Loki +loki.write "getting_started" { + endpoint { + url = "http://loki:3100/loki/api/v1/push" + } +} + +// Enables the ability to view logs in the Alloy UI in realtime +livedebugging { + enabled = true +} +```{{copy}} + +This configuration file can be viewed visually via the Alloy UI at [http://localhost:12345/graph]({{TRAFFIC_HOST1_12345}}/graph). + +![Alloy UI](https://grafana.com/media/docs/loki/getting-started-alloy-ui.png) + +In this view you can see the components of the Alloy configuration file and how they are connected: + +- **discovery.docker**: This component queries the metadata of the docker enviroment via the docker socket and discovers new containers, aswell as providing metdata about the containers. + +- **discovery.relabel**: This component converts a metadata (`__meta_docker_container_name`{{copy}}) label into a Loki label (`container`{{copy}}). + +- **loki.source.docker**: This component collects logs from the discovered containers and forwards them to the next component. 
It requests the metadata from the `discovery.docker`{{copy}} component and applies the relabeling rules from the `discovery.relabel`{{copy}} component. + +- **loki.process**: This component provides stages for log transformation and extraction. In this case it adds a static label `env=production`{{copy}} to all logs. + +- **loki.write**: This component writes the logs to Loki. It forwards the logs to the Loki endpoint `http://loki:3100/loki/api/v1/push`{{copy}}. + +## View Logs in realtime + +Grafana Alloy provides inbuilt realtime log viewer. This allows you to view current log entries and how they are being transformed via specific components of the pipeline. +To view live debugging mode open a browser tab and navigate to: [http://localhost:12345/debug/loki.process.getting_started]({{TRAFFIC_HOST1_12345}}/debug/loki.process.getting_started). diff --git a/workshops/course-tracker-test/step7.md b/workshops/course-tracker-test/step7.md new file mode 100644 index 0000000..a0b7f35 --- /dev/null +++ b/workshops/course-tracker-test/step7.md @@ -0,0 +1,89 @@ +# Loki Configuration + +Grafana Loki requires a configuration file to define how it should run. Within the `loki-fundamentals`{{copy}} directory, you will find a file called `loki-config.yaml`{{copy}}: + +```yaml +auth_enabled: false + +server: + http_listen_port: 3100 + grpc_listen_port: 9096 + log_level: info + grpc_server_max_concurrent_streams: 1000 + +common: + instance_addr: 127.0.0.1 + path_prefix: /tmp/loki + storage: + filesystem: + chunks_directory: /tmp/loki/chunks + rules_directory: /tmp/loki/rules + replication_factor: 1 + ring: + kvstore: + store: inmemory + +query_range: + results_cache: + cache: + embedded_cache: + enabled: true + max_size_mb: 100 + +limits_config: + metric_aggregation_enabled: true + allow_structured_metadata: true + volume_enabled: true + retention_period: 24h # 24h + +schema_config: + configs: + - from: 2020-10-24 + store: tsdb + object_store: filesystem + schema: v13 + index: + prefix: index_ + period: 24h + +pattern_ingester: + enabled: true + metric_aggregation: + loki_address: localhost:3100 + +ruler: + enable_alertmanager_discovery: true + enable_api: true + +frontend: + encoding: protobuf + +compactor: + working_directory: /tmp/loki/retention + delete_request_store: filesystem + retention_enabled: true +```{{copy}} + +To summarize the configuration file: + +- **auth_enabled**: This is set to false, meaning Loki does not need a [tenant ID](https://grafana.com/docs/loki/latest/operations/multi-tenancy/) for ingest or query. + +- **server**: Defines the ports Loki listens on, the log level, and the maximum number of concurrent gRPC streams. + +- **common**: Defines the common configuration for Loki. This includes the instance address, storage configuration, replication factor, and ring configuration. + +- **query_range**: This is defined to tell Loki to use inbuilt caching for query results. In production environments of Loki this is handled by a seperate cache service such as memcached. + +- **limits_config**: Defines the global limits for all Loki tenants. This includes enabling specific features such as metric aggregation and structured metadata. Limits can be defined on a per tenant basis, however this is considered an advanced configuration and for most use cases the global limits are sufficient. + +- **schema_config**: Defines the schema configuration for Loki. This includes the schema version, the object store, and the index configuration. 
+ +- **pattern_ingester**: Enables pattern ingesters which are used to discover log patterns. Mostly used by Grafana Logs Drilldown. + +- **ruler**: Enables the ruler component of Loki. This is used to create alerts based on log queries. + +- **frontend**: Defines the encoding format for the frontend. In this case it is set to `protobuf`{{copy}}. + +- **compactor**: Defines the compactor configuration. Used to compact the index and mange chunk retention. + +The above configuration file is a basic configuration file for Loki. For more advanced configuration options, refer to the [Loki Configuration](https://grafana.com/docs/loki/latest/configuration/) documentation. diff --git a/workshops/course-tracker-test/step8.md b/workshops/course-tracker-test/step8.md new file mode 100644 index 0000000..c4fe40d --- /dev/null +++ b/workshops/course-tracker-test/step8.md @@ -0,0 +1,47 @@ +# Grafana Loki Datasource + +The final piece of the puzzle is the Grafana Loki datasource. This is used by Grafana to connect to Loki and query the logs. Grafana has multiple ways to define a datasource; + +- **Direct**: This is where you define the datasource in the Grafana UI. + +- **Provisioning**: This is where you define the datasource in a configuration file and have Grafana automatically create the datasource. + +- **API**: This is where you use the Grafana API to create the datasource. + +In this case we are using the provisioning method. Instead of mounting the Grafana configuration directory, we have defined the datasource in the `docker-compose.yml`{{copy}} file: + +```yaml + grafana: + image: grafana/grafana:latest + environment: + - GF_FEATURE_TOGGLES_ENABLE=grafanaManagedRecordingRules + - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin + - GF_AUTH_ANONYMOUS_ENABLED=true + - GF_AUTH_BASIC_ENABLED=false + ports: + - 3000:3000/tcp + entrypoint: + - sh + - -euc + - | + mkdir -p /etc/grafana/provisioning/datasources + cat < /etc/grafana/provisioning/datasources/ds.yaml + apiVersion: 1 + datasources: + - name: Loki + type: loki + access: proxy + orgId: 1 + url: 'http://loki:3100' + basicAuth: false + isDefault: true + version: 1 + editable: true + EOF + /run.sh + networks: + - loki +```{{copy}} + +Within the entrypoint section of the `docker-compose.yml`{{copy}} file, we have defined a file called `run.sh`{{copy}} this runs on startup and creates the datasource configuration file `ds.yaml`{{copy}} in the Grafana provisioning directory. +This file defines the Loki datasource and tells Grafana to use it. Since Loki is running in the same Docker network as Grafana, we can use the service name `loki`{{copy}} as the URL. diff --git a/workshops/course-tracker-test/update.sh b/workshops/course-tracker-test/update.sh deleted file mode 100644 index 7b17011..0000000 --- a/workshops/course-tracker-test/update.sh +++ /dev/null @@ -1,60 +0,0 @@ -#!/bin/bash - -# Define variables -BIN_DIR="/usr/local/bin" -SERVICE_NAME="course-monitor" -BINARY_NAME="alloy-linux-amd64" -CONFIG_NAME="config.alloy" -DOWNLOAD_URL="https://github.com/grafana/alloy/releases/download/v1.6.1/alloy-linux-amd64.zip" -CONFIG_URL="https://raw.githubusercontent.com/grafana/killercoda/refs/heads/staging/tools/course-tracker/config.alloy" -SERVICE_FILE="/etc/systemd/system/${SERVICE_NAME}.service" - -# Create a temporary directory for the download -TMP_DIR=$(mktemp -d) -cd "$TMP_DIR" || exit 1 - -# Download and unzip Alloy -echo "Downloading Alloy..." 
-wget -q "$DOWNLOAD_URL" -O alloy.zip -unzip -q alloy.zip - -# Move the binary to /usr/local/bin and make it executable -echo "Installing Alloy..." -sudo mv "$BINARY_NAME" "$BIN_DIR/$BINARY_NAME" -sudo chmod +x "$BIN_DIR/$BINARY_NAME" - -# Download the configuration file -echo "Downloading configuration..." -sudo wget -q "$CONFIG_URL" -O "/etc/$CONFIG_NAME" - -# Create the systemd service -echo "Creating systemd service..." -sudo bash -c "cat < $SERVICE_FILE -[Unit] -Description=Course Monitor Service -After=network.target - -[Service] -ExecStart=$BIN_DIR/$BINARY_NAME run /etc/$CONFIG_NAME -Restart=always -User=root -WorkingDirectory=$BIN_DIR -StandardOutput=journal -StandardError=journal -LimitNOFILE=65536 -Environment=VM_UUID=$(cat /sys/class/dmi/id/product_uuid) -Environment=COURSE=course-tracker-test - -[Install] -WantedBy=multi-user.target -EOF" - - -# Reload systemd, enable and start the service -echo "Enabling and starting the service..." -sudo systemctl daemon-reload -sudo systemctl enable "$SERVICE_NAME" -sudo systemctl start "$SERVICE_NAME" -export PROMPT_COMMAND='history -a' - -echo "Service $SERVICE_NAME has been installed and started successfully."