---
title: Kubernetes Monitoring with Loki
menuTitle: Kubernetes Monitoring with Loki
weight: 300
description: Learn how to collect and store logs from your Kubernetes cluster using Loki.
---
One of the primary use cases for Loki is to collect and store logs from your Kubernetes cluster. These logs fall into three categories:

- Pod logs: Logs generated by containers running in pods in your cluster.
- Kubernetes events: Logs generated by the Kubernetes API server.
- Node logs: Logs generated by the nodes in your cluster.
{{< figure max-width="75%" src="/media/docs/loki/loki-k8s-logs.png" caption="Scraping Kubernetes Logs" alt="Scraping Kubernetes Logs" >}}
In this tutorial, we will deploy Loki and the Kubernetes Monitoring Helm chart to collect two of these log types: Pod logs and Kubernetes events. We will also deploy Grafana to visualize these logs.
Before you begin, here are some things you should know:

- Loki: Loki can run as a single binary or as a distributed system. In this tutorial, we will deploy Loki as a single binary, otherwise known as monolithic mode. Loki can be vertically scaled in this mode depending on the volume of logs you are collecting. For production use cases that monitor high volumes of logs, it is recommended to run Loki in distributed (microservices) mode.
- Deployment: We will deploy Loki, Grafana, and Alloy (as part of the Kubernetes Monitoring Helm chart) in the `meta` namespace of your Kubernetes cluster. Make sure you have the necessary permissions to create resources in this namespace. These pods also require resources to run, so consider the capacity your nodes have available. It is also possible to deploy only the Kubernetes Monitoring Helm chart (since it has a minimal resource footprint) within your cluster and write logs to an external Loki instance or Grafana Cloud.
- Storage: In this tutorial, Loki will use the default object storage backend provided by the Loki Helm chart: MinIO. For production use cases, you should migrate to a more production-ready storage backend such as S3, GCS, Azure Blob Storage, or a MinIO cluster; see the sketch after this list.
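Switching the storage backend away from MinIO is typically just a values change at install time. The following is a hypothetical sketch only; the exact value keys vary between chart versions, so check the Loki Helm documentation before relying on them:

```bash
# Hypothetical sketch: installing Loki with Amazon S3 instead of the bundled
# MinIO. Key names (minio.enabled, loki.storage.*) are assumptions based on
# common Loki Helm chart layouts -- verify them against your chart version.
helm install loki grafana/loki -n meta \
  --set minio.enabled=false \
  --set loki.storage.type=s3 \
  --set loki.storage.bucketNames.chunks=my-loki-chunks \
  --set loki.storage.s3.region=us-east-1
```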
Before you begin, you will need the following:

- A Kubernetes cluster running version `1.23` or later.
- `kubectl` installed on your local machine.
- `helm` installed on your local machine.
{{< admonition type="tip" >}}
Alternatively, you can try out this example in our interactive learning environment: Kubernetes Monitoring with Loki.

It's a fully configured environment with all the dependencies already installed.

Provide feedback, report bugs, and raise issues in the Grafana Killercoda repository.
{{< /admonition >}}
The Kubernetes Monitoring Helm chart will monitor two namespaces: `meta` and `prod`:

- `meta` namespace: This namespace will be used to deploy Loki, Grafana, and Alloy.
- `prod` namespace: This namespace will be used to deploy the sample application that will generate logs.

Create the `meta` and `prod` namespaces by running the following command:
```bash
kubectl create namespace meta && kubectl create namespace prod
```
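To confirm both namespaces exist before moving on, you can list them:

```bash
# Both namespaces should be listed with STATUS Active.
kubectl get namespace meta prod
```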
All three Helm charts (Loki, Grafana, and the Kubernetes Monitoring Helm chart) are available in the Grafana Helm repository. Add the Grafana Helm repository by running the following command:

```bash
helm repo add grafana https://grafana.github.io/helm-charts && helm repo update
```
The command above also runs `helm repo update`, which ensures you have the latest version of the charts.
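If you want to double-check that the charts are now available locally, `helm search repo` will list them (chart and app versions will vary):

```bash
# Each command should print the matching chart from the Grafana repository.
helm search repo grafana/loki
helm search repo grafana/grafana
helm search repo grafana/k8s-monitoring
```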
Clone the tutorial repository by running the following command:
```bash
git clone https://github.com/grafana/alloy-scenarios.git && cd alloy-scenarios/k8s-logs
```
As well as cloning the repository, we have also changed directories to `alloy-scenarios/k8s-logs`. The rest of this tutorial assumes you are in this directory.
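The values files used in the rest of this tutorial live in this directory; a quick listing confirms they are present:

```bash
# These files are referenced by the helm install commands below.
ls loki-values.yml grafana-values.yml k8s-monitoring-values.yml
```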
Grafana Loki will be used to store our collected logs. In this tutorial, we will deploy Loki with a minimal footprint and use the default storage backend provided by the Loki Helm chart (MinIO).

Note: Due to the resource constraints of the Kubernetes cluster running in the playground, we are deploying Loki using a custom values file. This values file reduces the resource requirements of Loki: it turns off features such as caching and the Loki Canary, and runs Loki with limited resources. Deployment can take up to 1 minute to complete.
To deploy Loki, run the following command:

```bash
helm install --values loki-values.yml loki grafana/loki -n meta
```
This command deploys Loki in the `meta` namespace. The command also includes a values file that specifies the configuration for Loki. For more details on how to configure the Loki Helm chart, refer to the Loki Helm documentation.
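Since startup can take a minute, one way to block until Loki is up is `kubectl wait`. This sketch assumes the standard `app.kubernetes.io/instance` label that Helm releases normally carry:

```bash
# Wait (up to 5 minutes) for every pod in the loki release to become Ready.
kubectl wait pods -n meta -l app.kubernetes.io/instance=loki \
  --for=condition=Ready --timeout=300s
```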
Next, we will deploy Grafana to the `meta` namespace. Grafana will be used to visualize the logs stored in Loki. To deploy Grafana, run the following command:

```bash
helm install --values grafana-values.yml grafana grafana/grafana --namespace meta
```
As before, the command includes a values file that specifies the configuration for Grafana. There are two important configuration attributes to take note of:

- `adminUser` & `adminPassword`: These are the credentials you will use to log in to Grafana. The values are `admin` and `adminadminadmin` respectively. The recommended practice is to either use a Kubernetes secret or allow Grafana to generate a password for you. For more details on how to configure the Grafana Helm chart, refer to the Grafana Helm documentation.
- `datasources`: This section of the configuration allows for the definition of data sources that Grafana will use. In this tutorial, we define a data source for Loki:

  ```yaml
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: Loki
          type: loki
          access: proxy
          orgId: 1
          url: http://loki-gateway.meta.svc.cluster.local:80
          basicAuth: false
          isDefault: false
          version: 1
          editable: false
  ```
This configuration defines a data source named `Loki` that Grafana will use to query logs stored in Loki. The `url` attribute specifies the URL of the Loki gateway. The Loki gateway is a service that sits in front of the Loki API and provides a single endpoint for ingesting and querying logs. The URL is in the format `http://loki-gateway.meta.svc.cluster.local:80`. The `loki-gateway` service is created by the Loki Helm chart. If you choose to deploy Loki in a different namespace or with a different name, you will need to update the `url` attribute accordingly.
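As a quick sanity check, you can hit the gateway from inside the cluster with a throwaway curl pod; with the default gateway configuration, the root path responds with `OK`:

```bash
# Launch a one-off pod, curl the gateway, then clean the pod up (--rm).
kubectl run gateway-check --rm -it --restart=Never -n meta \
  --image=curlimages/curl -- \
  curl -s http://loki-gateway.meta.svc.cluster.local/
```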
The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana stack. This includes the ability to collect metrics, logs, traces, and continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events.
To deploy the Kubernetes Monitoring Helm chart, run the following command:

```bash
helm install --values ./k8s-monitoring-values.yml k8s grafana/k8s-monitoring -n meta
```
Within the configuration file `k8s-monitoring-values.yml` we have defined the following:
```yaml
---
cluster:
  name: meta-monitoring-tutorial

destinations:
  - name: loki
    type: loki
    url: http://loki-gateway.meta.svc.cluster.local/loki/api/v1/push

clusterEvents:
  enabled: true
  collector: alloy-logs
  namespaces:
    - meta
    - prod

nodeLogs:
  enabled: false

podLogs:
  enabled: true
  gatherMethod: kubernetesApi
  collector: alloy-logs
  namespaces:
    - meta
    - prod

# Collectors
alloy-singleton:
  enabled: false
alloy-metrics:
  enabled: false
alloy-logs:
  enabled: true
alloy-profiles:
  enabled: false
alloy-receiver:
  enabled: false
```
To break down the configuration file:

- Define the cluster name as `meta-monitoring-tutorial`. This is a static label that will be attached to all logs collected by the Kubernetes Monitoring Helm chart.
- Define a destination named `loki` that will be used to forward logs to Loki. The `url` attribute specifies the URL of the Loki gateway. If you choose to deploy Loki in a different namespace or in a different location entirely, you will need to update the `url` attribute accordingly.
- Enable the collection of cluster events and pod logs:
  - `collector`: specifies which collector to use to collect logs. In this case, we are using the `alloy-logs` collector.
  - `namespaces`: specifies the namespaces to collect logs from. In this case, we are collecting logs from the `meta` and `prod` namespaces.
- Disable the collection of node logs, as it requires mounting `/var/log/journal`, which is out of scope for this tutorial.
- Lastly, define the role of the collector. The Kubernetes Monitoring Helm chart will deploy only what you need and nothing more. In this case, we are telling the Helm chart to deploy only Alloy with the capability to collect logs. If you need to collect Kubernetes metrics, traces, or continuous profiling data, you can enable the respective collectors.
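You can verify that only the log collector was deployed by listing the pods belonging to the `k8s` release:

```bash
# Expect alloy-logs pods only; the metrics, profiles, and receiver
# collectors are disabled in this values file.
kubectl get pods -n meta -l app.kubernetes.io/instance=k8s
```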
To access Grafana, you will need to port-forward the Grafana service to your local machine. To do this, run the following command:
```bash
export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}") && \
kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0
```
{{< admonition type="tip" >}}
This will make your terminal unusable until you stop the port-forwarding process. To stop it, press `Ctrl + C`.
{{< /admonition >}}
This command port-forwards the Grafana service to your local machine on port `3000`. You can access Grafana by navigating to http://localhost:3000 in your browser. The default credentials are `admin` and `adminadminadmin`. One of the first places you should visit is Explore Logs, which provides a no-code view of the logs being stored in Loki:

http://localhost:3000/a/grafana-lokiexplore-app
{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-explore-logs.png" caption="Explore Logs view of K8s logs" alt="Explore Logs view of K8s logs" >}}
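If you prefer the HTTP API to the UI, you can also query Loki directly through the gateway from a second terminal. This sketch assumes the cluster name is attached to log streams as a `cluster` label; the exact label key depends on the Kubernetes Monitoring Helm chart version:

```bash
# Forward the gateway locally, then ask Loki for a few matching log lines.
kubectl --namespace meta port-forward svc/loki-gateway 3100:80 &
sleep 2  # give the port-forward a moment to establish
curl -sG http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={cluster="meta-monitoring-tutorial"}' \
  --data-urlencode 'limit=5'
```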
The Kubernetes Monitoring Helm chart deploys Grafana Alloy to collect and forward telemetry data from the Kubernetes cluster. The Helm chart is designed to abstract away the need to write an Alloy configuration file. However, if you would like to understand the pipeline, you can view it in the Alloy UI. To access the Alloy UI, you will need to port-forward the Alloy service to your local machine. To do this, run the following command:
```bash
export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=alloy-logs,app.kubernetes.io/instance=k8s" -o jsonpath="{.items[0].metadata.name}") && \
kubectl --namespace meta port-forward $POD_NAME 12345 --address 0.0.0.0
```
{{< admonition type="tip" >}}
This will make your terminal unusable until you stop the port-forwarding process. To stop it, press `Ctrl + C`.
{{< /admonition >}}
This command port-forwards the Alloy service to your local machine on port `12345`. You can access the Alloy UI by navigating to http://localhost:12345 in your browser.
{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-alloy-ui.png" caption="Grafana Alloy UI" alt="Grafana Alloy UI" >}}
Lastly, let's deploy a sample application to the `prod` namespace that will generate some logs. To deploy the sample application, run the following command:

```bash
helm install tempo grafana/tempo-distributed -n prod
```
This will deploy a default version of Grafana Tempo to the `prod` namespace. Tempo is a distributed tracing backend used to store and query traces. Normally, Tempo would sit alongside Loki and Grafana in the `meta` namespace, but for the purpose of this tutorial, we will pretend it is the primary application generating logs.
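Once the chart is installed, Tempo's components should appear in the `prod` namespace and begin emitting logs:

```bash
# Tempo's distributed components (distributor, ingester, and so on)
# should show up here and move to Running.
kubectl get pods -n prod
```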
Once deployed, let's expose Grafana once more:
```bash
export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}") && \
kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0
```
and navigate to http://localhost:3000/a/grafana-lokiexplore-app to view Grafana Tempo logs.
{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-tempo.png" caption="Label view of Tempo logs" alt="Label view of Tempo logs" >}}
In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Monitoring Helm chart to collect and store logs from a Kubernetes cluster. We deployed a minimal test version of each of these Helm charts to demonstrate how quickly you can get started with Loki. It is now worth exploring each of these Helm charts in more detail to understand how to scale them to meet your production needs.
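If you want to tear the tutorial environment down afterwards, removing the Helm releases and the two namespaces is enough. A minimal sketch, assuming the release names used above:

```bash
# Uninstall the tutorial releases, then delete their namespaces.
helm uninstall tempo -n prod
helm uninstall k8s grafana loki -n meta
kubectl delete namespace prod meta
```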