finally got the right magic for fluent to forward logs.
vallard committed Jul 23, 2022
1 parent 16e5462 commit e806374
Showing 3 changed files with 104 additions and 2 deletions.
49 changes: 47 additions & 2 deletions m07-fek/README.md
@@ -9,6 +9,51 @@ Logging information from your applications to search, verify, and index on is a
* ElasticSearch (OpenSearch) - ElasticSearch is open source, and the company behind it (Elastic) seemed to have issues with Amazon. So Amazon forked it and now offers OpenSearch. Good or bad, this is what we'll use.
* Kibana - This is our dashboard for viewing the logs and keeping them sorted.

## Installation and Configuration

### OpenSearch

The first step is to install an OpenSearch cluster. We'll keep it small to start out with, but these clusters can get big, so keep an eye on it. The other thing to watch for is that the logs constantly fill up! In the past we have implemented a culler that clears these logs out.

The OpenSearch cluster was installed as a Terragrunt module before the course started.

But now we'd like to access OpenSearch. Since it sits in our private subnet, we have no access to it from the outside. However, we can create a password-protected Ingress rule that allows us access to the Kibana service.

To do this we create a service with an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) that maps to OpenSearch.

Then we create an Ingress rule, and now we have access to the Kibana dashboard. Both are defined in kibana-proxy.yaml:

```
kubectl apply -f kibana-proxy.yaml
```
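
The Ingress in kibana-proxy.yaml references a basic-auth secret named `htpasswd` in the `monitoring` namespace. If that secret doesn't already exist, a minimal sketch of creating it looks like this (the `admin` user here is just an example; ingress-nginx expects the key inside the secret to be named `auth`):

```
# Create an htpasswd file (prompts for a password); requires the htpasswd utility.
htpasswd -c auth admin

# Store it as the secret the Ingress auth-secret annotation points at.
kubectl -n monitoring create secret generic htpasswd --from-file=auth
```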

This lets us log in with our standard username/password, and now we can visit

[https://kibana.k8s.castlerock.ai/_dashboards](https://kibana.k8s.castlerock.ai/_dashboards) and see our OpenSearch cluster.

![open search dashboard](../images/mo/fek01.png)

There's not much to see in here right now because there is nothing shipping logs into it yet. That's what Fluentd is for.



### Fluentd

Fluentd runs on each node, collects container logs, and forwards them to OpenSearch. First add the Helm repo:

```
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```

We will be adding our own values.yaml when installing this so that it forwards logs to our OpenSearch cluster (the full file is included in this commit).

You can see all the values that can be configured with:

```
helm show values fluent/fluentd
```

Then create a namespace and install the chart with our values:

```
kubectl create ns fluentd
helm upgrade --install -n fluentd fluentd -f values.yaml fluent/fluentd
```
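
To confirm the agents came up, a quick check looks like this (a sketch, assuming the chart's default DaemonSet mode, where the resources take the release name `fluentd`):

```
# The chart should create a DaemonSet with one Fluentd pod per node.
kubectl -n fluentd get daemonset,pods

# Tail the logs to make sure the opensearch output isn't throwing connection errors.
kubectl -n fluentd logs ds/fluentd --tail=20
```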

39 changes: 39 additions & 0 deletions m07-fek/kibana-proxy.yaml
@@ -0,0 +1,39 @@
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: kibana-proxy
  name: kibana-proxy
  namespace: monitoring
spec:
  type: ExternalName
  externalName: vpc-opensearch-stage-woesdamvqbli5siagd7guw7ubi.us-west-2.es.amazonaws.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: htpasswd
    nginx.ingress.kubernetes.io/auth-realm: 'You may ask yourself: How did I get here?'
  name: kibana-proxy
  namespace: monitoring
spec:
  tls:
  - hosts:
    - kibana.k8s.castlerock.ai
    secretName: kibana-tls-cert
  rules:
  - host: kibana.k8s.castlerock.ai
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: kibana-proxy
            port:
              number: 80
18 changes: 18 additions & 0 deletions m07-fek/values.yaml
@@ -0,0 +1,18 @@
plugins:
  - fluent-plugin-opensearch

fileConfigs:
  04_outputs.conf: |-
    <label @OUTPUT>
      <match ** >
        @type opensearch
        host vpc-opensearch-stage-woesdamvqbli5siagd7guw7ubi.us-west-2.es.amazonaws.com
        port 443
        ssl_verify false
        logstash_format true
        ssl_version TLSv1_2
        logstash_prefix fluentd
        include_timestamp true
        scheme https
      </match>
    </label>
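
With `logstash_format true` and `logstash_prefix fluentd`, forwarded logs should land in daily indices named `fluentd-YYYY.MM.DD`. One way to confirm they are arriving is to list the indices, assuming you run this from somewhere inside the VPC (the endpoint is not reachable from outside) and the domain's access policy lets you query it; otherwise add credentials, or simply check the Kibana dashboard set up earlier:

```
# -k skips TLS verification, mirroring ssl_verify false in the output config above.
curl -sk "https://vpc-opensearch-stage-woesdamvqbli5siagd7guw7ubi.us-west-2.es.amazonaws.com/_cat/indices/fluentd-*?v"
```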
