some updates with starting monitoring section.
vallard committed Oct 7, 2020
1 parent 2b3359f commit 54feb6d
Showing 4 changed files with 171 additions and 2 deletions.
2 changes: 1 addition & 1 deletion segment03-install/createEKSctlCluster.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -x
time eksctl create cluster \
- --name aug05 \
+ --name oct07 \
--version 1.17 \
--region us-west-2 \
--nodegroup-name standard-workers \
2 changes: 1 addition & 1 deletion segment07-integrations/serverless.yml
@@ -5,7 +5,7 @@ provider:
region: us-west-2
environment:
## Define the name of your EKS cluster you want the lambda function to be able to access
CLUSTER: "aug05"
CLUSTER: "oct07"
## define the role that this lambda function will run under. This role should have access to
## be able to run kubectl commands.
role: arn:aws:iam::188966951897:role/kubeLambda
107 changes: 107 additions & 0 deletions segment08-monitoring/README.md
@@ -0,0 +1,107 @@
# Cluster Monitoring

We can use [Prometheus](https://prometheus.io) and [Grafana](https://grafana.com/) for monitoring our cluster.

## Metrics Server

In [Segment 06](../segment06-admin/README.md) we installed the metrics server. Make sure that step has been completed.

You can check that it is working with:

```
kubectl get --raw /metrics
```
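
Note that `kubectl get --raw /metrics` returns the kube-apiserver's own metrics rather than data from the metrics server itself. To confirm the metrics API specifically, a couple of quick checks (a sketch; output will vary with your cluster) are:

```
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl top nodes
```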


## Prometheus Operator

You can install the operator by cloning the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) repository:

```
git clone https://github.com/prometheus-operator/kube-prometheus
```

Find the appropriate release for your version of Kubernetes in the compatibility table in the [README](https://github.com/prometheus-operator/kube-prometheus/blob/master/README.md). For example, if you are running Kubernetes 1.17 (check with `kubectl version`), the table shows you should use `release-0.4`. To list the available release branches, run:

```
cd kube-prometheus
git branch -a
```
Here we see all the branch names. To switch to the release branch, run:

```
git checkout remotes/origin/release-0.4
```
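
If you prefer not to work on a detached `HEAD`, an optional variation is to create a local tracking branch instead:

```
git checkout -b release-0.4 origin/release-0.4
```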

Now we can install the operator with:

```
kubectl create -f manifests/setup
```

You should then be able to see the new custom resource definitions, such as `servicemonitors`, by running:

```
kubectl get crd
```
Confirm that a `servicemonitors.monitoring.coreos.com` custom resource definition is listed.
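
The CRDs can take a few seconds to register, so it can help to wait until they are established before creating the remaining manifests. One way to do that (a sketch; adjust the timeout as needed) is:

```
kubectl wait --for condition=Established crd/servicemonitors.monitoring.coreos.com --timeout=120s
```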

Once the CRDs are established, you can install the rest of the monitoring components:

```
kubectl create -f manifests/
```

You'll be able to see all the resources defined in the `monitoring` namespace with:

```
kubectl get pods -n monitoring
```

Output looks as follows:

```
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          3m15s
alertmanager-main-1                    2/2     Running   0          3m15s
alertmanager-main-2                    2/2     Running   0          3m15s
grafana-58dc7468d7-vvcnc               1/1     Running   0          3m12s
kube-state-metrics-765c7c7f95-kxddc    3/3     Running   0          3m12s
node-exporter-cnhm6                    2/2     Running   0          2m15s
node-exporter-vnh9r                    2/2     Running   0          3m13s
prometheus-adapter-5cd5798d96-j8xnn    1/1     Running   0          3m13s
prometheus-k8s-0                       3/3     Running   1          3m13s
prometheus-k8s-1                       3/3     Running   1          3m13s
prometheus-operator-5f75d76f9f-n9krn   1/1     Running   0          7m2s
```
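
You can also list the `ServiceMonitor` objects that kube-prometheus created; these are what tell Prometheus which services to scrape (a quick check; the exact names may differ between releases):

```
kubectl get servicemonitors -n monitoring
```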

### Ingress Rules

We now have three dashboards available, but they are, of course, not exposed outside the cluster. We can reach them locally with port forwarding:

```
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
kubectl --namespace monitoring port-forward svc/grafana 3000
kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
```

Opening `localhost:9090` in a browser then connects us to Prometheus:

![prometheus](../images/mon01.png)
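
If you prefer the command line, you can also query Prometheus over the same port-forward using its HTTP API (a sketch; the `up` query simply lists the scrape targets Prometheus currently sees):

```
curl 'http://localhost:9090/api/v1/query?query=up'
```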

We can expose these with ingress rules as well. You probably wouldn't want to expose your monitoring stack like this to the outside world, but we will do it here to show how to access the services through the ingress controller we created earlier. Edit the `monitoring-ingress-rules.yaml` file to use your own domain names. Once done, you can run:

```
kubectl apply -f monitoring-ingress-rules.yaml
```
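
You can check that the ingress objects and TLS certificates were created (assuming cert-manager is set up to handle the `letsencrypt-prod` cluster issuer referenced in the annotations):

```
kubectl get ingress -n monitoring
kubectl get certificates -n monitoring
```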

Now we can access all of these at the following domains:

* [grafana.k8s.castlerock.ai](https://grafana.k8s.castlerock.ai)
* [alertmanager.k8s.castlerock.ai](https://alertmanager.k8s.castlerock.ai)
* [prometheus.k8s.castlerock.ai](https://prometheus.k8s.castlerock.ai)





62 changes: 62 additions & 0 deletions segment08-monitoring/monitoring-ingress-rules.yaml
@@ -0,0 +1,62 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: prometheus-k8s
  namespace: monitoring
spec:
  tls:
  - hosts:
    - prometheus.k8s.castlerock.ai
    secretName: prometheus-tls-cert
  rules:
  - host: prometheus.k8s.castlerock.ai
    http:
      paths:
      - backend:
          serviceName: prometheus-k8s
          servicePort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: grafana
  namespace: monitoring
spec:
  tls:
  - hosts:
    - grafana.k8s.castlerock.ai
    secretName: grafana-tls-cert
  rules:
  - host: grafana.k8s.castlerock.ai
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: alertmanager
  namespace: monitoring
spec:
  tls:
  - hosts:
    - alertmanager.k8s.castlerock.ai
    secretName: alertmanager-tls-cert
  rules:
  - host: alertmanager.k8s.castlerock.ai
    http:
      paths:
      - backend:
          serviceName: alertmanager-main
          servicePort: 9093
