diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/404.html b/404.html new file mode 100644 index 0000000..a3e91f3 --- /dev/null +++ b/404.html @@ -0,0 +1,4 @@ +Engineering Blog +

404

Page Not Found

Sorry, this page does not exist.
You can head back to the homepage.

© 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
\ No newline at end of file diff --git a/about/index.html b/about/index.html new file mode 100644 index 0000000..995f02a --- /dev/null +++ b/about/index.html @@ -0,0 +1,14 @@ +About me · Engineering Blog +

About me

Hello, my name is Pascal. I’m currently working as a DevOps Engineer at Dr. Klein. As a DevOps Engineer, I have deep knowledge of distributed systems, network and system engineering, and security. Since October 2021 I have also been studying Applied Computer Science at the Fernuniversität Hagen while working full time.

I read a lot about DevOps, Site Reliability Engineering, distributed systems and automation. I am very curious, so I’m always experimenting with different projects and enjoy automating things.

If I had to describe myself in three words, I’d say I am inquisitive, engaged and structured.

Skills

  • Windows Server and Linux (RHEL, Debian/Ubuntu and FlatCar) systems engineering.
  • Docker (building images, service administration, etc.)
  • Kubernetes administration and usage.
  • Amazon Web Services knowledge covering services like CloudTrail, CloudWatch, EC2, ECS, ElastiCache, ELB, Lambda, RDS and, of course, VPC.
  • Coding with Python 3, Golang, PowerShell, Bash and some Java and C. Python in particular is the Swiss Army knife among programming languages.
  • Knowledge in common network protocols (DNS, HTTP(S), IPv4, IPv6, TCP, TLS, UDP) and network administration (firewalls, routing and load balancing).
  • Knowledge of continuous integration and continuous deployment (e.g. with Jenkins or GitHub automation).
  • Configuration management with Ansible, Terraform and PowerShell Desired State Configuration. I really enjoy running my Linux servers with Ansible (see the short sketch after this list). For cloud orchestration I prefer Terraform, because it is much faster and easier.
  • Monitoring systems and services with Icinga 2 or Prometheus and VictoriaMetrics.
  • Knowledge of web servers like Nginx or Caddy and common use cases like reverse proxying, health checks or TLS termination.
  • I’m also a TÜV-certified IT Architecture and Technology Professional.
  • I recently finished my Bachelor’s degree and now hold a B.Sc. in Business Informatics. Since October 2021 I have been studying Applied Computer Science.
  • Knowledge of CNCF projects like Argo, CoreDNS, Keda, Falco, Helm, Cert-Manager, Carvel ytt, ko, and Kubescape.
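
To make the Ansible item above a bit more concrete, here is a minimal sketch of the kind of playbook I mean — keeping Nginx installed and serving a reverse proxy configuration. The host group, template name and file paths are hypothetical placeholders, not taken from my actual setup:

# Minimal sketch: keep Nginx installed and its reverse proxy config deployed.
# "webservers" and "reverse-proxy.conf.j2" are hypothetical placeholders.
- name: Configure Nginx reverse proxy
  hosts: webservers
  become: true
  tasks:
    - name: Install Nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Deploy reverse proxy configuration
      ansible.builtin.template:
        src: reverse-proxy.conf.j2
        dest: /etc/nginx/conf.d/reverse-proxy.conf
        mode: "0644"
      notify: Reload nginx
  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded

Running this repeatedly is idempotent: packages and config files are only touched when they drift, and Nginx is only reloaded when the configuration actually changes.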

Projects

  • I’m running a Raspberry Pi 4 at home which is completely managed with Ansible. Of course I can still log in via SSH, but I prefer using Ansible :-).
  • I created my own Telegram bot with Python 3. I later dockerized the bot, as you can see in my GitHub repo.
  • For my university studies I created a simple dummy REST API to evaluate the scaling abilities of different AWS technologies. I am going to write a blog post about this one.
  • The latest one is of course this blog, hosted on GitHub Pages and created with Hugo.
  • As part of my Bachelor exam I created an entire DevOps automation platform in AWS to build and test cross-platform apps with Flutter. The entire infrastructure was created and maintained with Terraform. The configuration magic came from code files within a GitHub repo, so everything was easily changeable.
  • Running Kubernetes on AWS with kOps using only spot instances. Certificates are managed with Cert-Manager, manifests are generated with ytt, and GitOps is handled by Argo (a minimal certificate sketch follows below).
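
To show what the Cert-Manager part of that setup looks like, here is a minimal sketch of a Certificate resource. The name, namespace and domain are hypothetical, and it assumes an existing ClusterIssuer called letsencrypt-prod:

# Minimal sketch of a cert-manager Certificate.
# blog-tls, blog.example.com and letsencrypt-prod are hypothetical placeholders.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: blog-tls
  namespace: prod
spec:
  secretName: blog-tls          # Secret the signed certificate is written into
  dnsNames:
    - blog.example.com
  issuerRef:
    name: letsencrypt-prod      # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer

Cert-Manager then requests, renews and rotates the TLS certificate automatically, so the Ingress only has to reference the resulting Secret.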

Interests

Now you already know quite a lot about my skills, but I also

  • Really enjoy riding my gravel bike around Brunswick.
  • Like to go on a hike with friends.
  • Enjoy playing chess online or with a friend.
  • Play in Eintracht Braunschweig’s table football team or
  • Enjoy reading a book in my favourite chair with a cup of tea.

Did I make you curious, or do you have any questions? Feel free to contact me!

© 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
\ No newline at end of file diff --git a/categories/index.html b/categories/index.html new file mode 100644 index 0000000..a5e6ae3 --- /dev/null +++ b/categories/index.html @@ -0,0 +1,4 @@ +Categories · Engineering Blog +

Categories

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/categories/index.xml b/categories/index.xml new file mode 100644 index 0000000..7529b90 --- /dev/null +++ b/categories/index.xml @@ -0,0 +1 @@ +Categories on Engineering Bloghttps://pgrunm.github.io/categories/Recent content in Categories on Engineering BlogHugoen-us \ No newline at end of file diff --git a/code/ytt/Taskfile.yaml b/code/ytt/Taskfile.yaml new file mode 100644 index 0000000..585c750 --- /dev/null +++ b/code/ytt/Taskfile.yaml @@ -0,0 +1,12 @@ +# https://taskfile.dev + +version: '3' + +tasks: + ytt: + desc: Renders the Kubernetes manifests for the blog post + cmds: + - ytt -f deployment -f values.yaml > deployment.autogen.yaml + sources: + - "deployment/*.yaml" + - "values.yaml" \ No newline at end of file diff --git a/code/ytt/deployment.autogen.yaml b/code/ytt/deployment.autogen.yaml new file mode 100644 index 0000000..cd0ea19 --- /dev/null +++ b/code/ytt/deployment.autogen.yaml @@ -0,0 +1,218 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: example-exporter + team: devops + name: example-exporter + namespace: dev +spec: + replicas: 1 + selector: + matchLabels: + app: example-exporter + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + labels: + app: example-exporter + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: 9100 + spec: + containers: + image: ghcr.io/example-org/example-exporter:0.2 + env: + - name: DATABASE + value: dev.example.com + imagePullPolicy: Always + livenessProbe: null + failureThreshold: 3 + httpGet: + path: /metrics + port: 9100 + periodSeconds: 10 + name: example-exporter + ports: + - containerPort: 9100 + name: http + readinessProbe: + httpGet: + path: /metrics + port: 9100 + periodSeconds: 5 + resources: null + limits: + memory: 32Mi + cpu: 0.01 + requests: + memory: 16Mi + priorityClassName: low + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: example-exporter + team: devops + name: example-exporter + namespace: qa +spec: + replicas: 1 + selector: + matchLabels: + app: example-exporter + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + labels: + app: example-exporter + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: 9100 + spec: + containers: + image: ghcr.io/example-org/example-exporter:0.2 + env: + - name: DATABASE + value: qa.example.com + imagePullPolicy: Always + livenessProbe: null + failureThreshold: 3 + httpGet: + path: /metrics + port: 9100 + periodSeconds: 10 + name: example-exporter + ports: + - containerPort: 9100 + name: http + readinessProbe: + httpGet: + path: /metrics + port: 9100 + periodSeconds: 5 + resources: null + limits: + memory: 32Mi + cpu: 0.01 + requests: + memory: 16Mi + priorityClassName: low + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: example-exporter + team: devops + name: example-exporter + namespace: prod +spec: + replicas: 1 + selector: + matchLabels: + app: example-exporter + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + labels: + app: example-exporter + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: 9100 + spec: + containers: + image: ghcr.io/example-org/example-exporter:0.2 + env: + - name: DATABASE + value: prod.example.com + imagePullPolicy: Always + livenessProbe: null + failureThreshold: 3 + httpGet: + path: 
/metrics + port: 9100 + periodSeconds: 10 + name: example-exporter + ports: + - containerPort: 9100 + name: http + readinessProbe: + httpGet: + path: /metrics + port: 9100 + periodSeconds: 5 + resources: null + limits: + memory: 32Mi + cpu: 0.01 + requests: + memory: 16Mi + priorityClassName: low + restartPolicy: Always +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: example-exporter + team: devops + name: example-exporter + namespace: dev +spec: + ports: + - name: http + port: 9100 + protocol: TCP + targetPort: 9100 + selector: + app: example-exporter +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: example-exporter + team: devops + name: example-exporter + namespace: qa +spec: + ports: + - name: http + port: 9100 + protocol: TCP + targetPort: 9100 + selector: + app: example-exporter +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: example-exporter + team: devops + name: example-exporter + namespace: prod +spec: + ports: + - name: http + port: 9100 + protocol: TCP + targetPort: 9100 + selector: + app: example-exporter diff --git a/code/ytt/deployment/deployment.tmpl.yaml b/code/ytt/deployment/deployment.tmpl.yaml new file mode 100644 index 0000000..029ad8f --- /dev/null +++ b/code/ytt/deployment/deployment.tmpl.yaml @@ -0,0 +1,60 @@ +#@ load("@ytt:data", "data") + +#@ for item in data.values.stages: +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: #@ data.values.app_name + team: #@ data.values.labels.team + name: #@ data.values.app_name + namespace: #@ item.namespace +spec: + replicas: #@ item.replicas + selector: + matchLabels: + app: #@ data.values.app_name + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + labels: + app: #@ data.values.app_name + annotations: + "prometheus.io/scrape": "true" + "prometheus.io/port": #@ data.values.metrics.port + spec: + containers: + image: #@ "ghcr.io/example-org/" + data.values.app_name + ":" + str(item.version) + env: + - name: DATABASE + value: #@ item.variables.database + imagePullPolicy: Always + livenessProbe: + failureThreshold: 3 + httpGet: + path: #@ data.values.metrics.path + port: #@ data.values.metrics.port + periodSeconds: 10 + name: #@ data.values.app_name + ports: + - containerPort: #@ data.values.metrics.port + name: http + readinessProbe: + httpGet: + path: #@ data.values.metrics.path + port: #@ data.values.metrics.port + periodSeconds: 5 + resources: + limits: + memory: #@ data.values.resources.mem_limit + cpu: #@ data.values.resources.cpu_limit + requests: + memory: #@ data.values.resources.mem_requests + priorityClassName: #@ data.values.prioritiy_class + restartPolicy: Always +#@ end \ No newline at end of file diff --git a/code/ytt/deployment/service.tmpl.yaml b/code/ytt/deployment/service.tmpl.yaml new file mode 100644 index 0000000..8d6caa3 --- /dev/null +++ b/code/ytt/deployment/service.tmpl.yaml @@ -0,0 +1,21 @@ +#@ load("@ytt:data", "data") + +#@ for item in data.values.stages: +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: #@ data.values.app_name + team: #@ data.values.labels.team + name: #@ data.values.app_name + namespace: #@ item.namespace +spec: + ports: + - name: http + port: 9100 + protocol: TCP + targetPort: 9100 + selector: + app: #@ data.values.app_name +#@ end \ No newline at end of file diff --git a/code/ytt/values.yaml b/code/ytt/values.yaml new file mode 100644 index 0000000..01ea39a --- /dev/null +++ b/code/ytt/values.yaml @@ -0,0 +1,33 @@ +#@data/values +--- 
+app_name: example-exporter +prioritiy_class: low +metrics: + scrape: true + port: 9100 + path: /metrics +labels: + team: devops +resources: + mem_limit: 32Mi + mem_requests: 16Mi + cpu_limit: 0.01 +stages: + - name: dev + namespace: dev + variables: + database: dev.example.com + replicas: 1 + version: 0.2 + - name: qa + namespace: qa + variables: + database: qa.example.com + replicas: 1 + version: 0.2 + - name: prod + namespace: prod + variables: + database: prod.example.com + replicas: 1 + version: 0.2 diff --git a/contact/index.html b/contact/index.html new file mode 100644 index 0000000..6ff9d36 --- /dev/null +++ b/contact/index.html @@ -0,0 +1,4 @@ +Contact · Engineering Blog +

    Contact

    Did you find an issue, want to contact me or just want to chat? Feel free to! You can find many ways to contact me on the front page.

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/css/coder-dark.min.f6534b0b446b75d9b6ad77a97d43ede2ddaeff1b6e2361fb7198d6f8fcb7f83f.css b/css/coder-dark.min.f6534b0b446b75d9b6ad77a97d43ede2ddaeff1b6e2361fb7198d6f8fcb7f83f.css new file mode 100644 index 0000000..bdc92c0 --- /dev/null +++ b/css/coder-dark.min.f6534b0b446b75d9b6ad77a97d43ede2ddaeff1b6e2361fb7198d6f8fcb7f83f.css @@ -0,0 +1 @@ +body.colorscheme-dark{color:#dadada;background-color:#212121}body.colorscheme-dark a{color:#42a5f5}body.colorscheme-dark h1,body.colorscheme-dark h2,body.colorscheme-dark h3,body.colorscheme-dark h4,body.colorscheme-dark h5,body.colorscheme-dark h6{color:#dadada}body.colorscheme-dark h1:hover .heading-link,body.colorscheme-dark h2:hover .heading-link,body.colorscheme-dark h3:hover .heading-link,body.colorscheme-dark h4:hover .heading-link,body.colorscheme-dark h5:hover .heading-link,body.colorscheme-dark h6:hover .heading-link{visibility:visible}body.colorscheme-dark h1 .heading-link,body.colorscheme-dark h2 .heading-link,body.colorscheme-dark h3 .heading-link,body.colorscheme-dark h4 .heading-link,body.colorscheme-dark h5 .heading-link,body.colorscheme-dark h6 .heading-link{color:#42a5f5;font-weight:inherit;text-decoration:none;font-size:80%;visibility:hidden}body.colorscheme-dark h1 .title-link,body.colorscheme-dark h2 .title-link,body.colorscheme-dark h3 .title-link,body.colorscheme-dark h4 .title-link,body.colorscheme-dark h5 .title-link,body.colorscheme-dark h6 .title-link{color:inherit;font-weight:inherit;text-decoration:none}body.colorscheme-dark blockquote{border-left:2px solid #424242}body.colorscheme-dark th,body.colorscheme-dark td{padding:1.6rem}body.colorscheme-dark table{border-collapse:collapse}body.colorscheme-dark table td,body.colorscheme-dark table th{border:2px solid #dadada}body.colorscheme-dark table tr:first-child th{border-top:0}body.colorscheme-dark table tr:last-child td{border-bottom:0}body.colorscheme-dark table tr td:first-child,body.colorscheme-dark table tr th:first-child{border-left:0}body.colorscheme-dark table tr td:last-child,body.colorscheme-dark table tr th:last-child{border-right:0}@media(prefers-color-scheme:dark){body.colorscheme-auto{color:#dadada;background-color:#212121}body.colorscheme-auto a{color:#42a5f5}body.colorscheme-auto h1,body.colorscheme-auto h2,body.colorscheme-auto h3,body.colorscheme-auto h4,body.colorscheme-auto h5,body.colorscheme-auto h6{color:#dadada}body.colorscheme-auto h1:hover .heading-link,body.colorscheme-auto h2:hover .heading-link,body.colorscheme-auto h3:hover .heading-link,body.colorscheme-auto h4:hover .heading-link,body.colorscheme-auto h5:hover .heading-link,body.colorscheme-auto h6:hover .heading-link{visibility:visible}body.colorscheme-auto h1 .heading-link,body.colorscheme-auto h2 .heading-link,body.colorscheme-auto h3 .heading-link,body.colorscheme-auto h4 .heading-link,body.colorscheme-auto h5 .heading-link,body.colorscheme-auto h6 .heading-link{color:#42a5f5;font-weight:inherit;text-decoration:none;font-size:80%;visibility:hidden}body.colorscheme-auto h1 .title-link,body.colorscheme-auto h2 .title-link,body.colorscheme-auto h3 .title-link,body.colorscheme-auto h4 .title-link,body.colorscheme-auto h5 .title-link,body.colorscheme-auto h6 .title-link{color:inherit;font-weight:inherit;text-decoration:none}body.colorscheme-auto blockquote{border-left:2px solid #424242}body.colorscheme-auto th,body.colorscheme-auto td{padding:1.6rem}body.colorscheme-auto table{border-collapse:collapse}body.colorscheme-auto table 
td,body.colorscheme-auto table th{border:2px solid #dadada}body.colorscheme-auto table tr:first-child th{border-top:0}body.colorscheme-auto table tr:last-child td{border-bottom:0}body.colorscheme-auto table tr td:first-child,body.colorscheme-auto table tr th:first-child{border-left:0}body.colorscheme-auto table tr td:last-child,body.colorscheme-auto table tr th:last-child{border-right:0}}body.colorscheme-dark .content .post .tags .tag{background-color:#424242}body.colorscheme-dark .content .post .tags .tag a{color:#dadada}body.colorscheme-dark .content .post .tags .tag a:active{color:#dadada}body.colorscheme-dark .content .list ul li .title{color:#dadada}body.colorscheme-dark .content .list ul li .title:hover,body.colorscheme-dark .content .list ul li .title:focus{color:#42a5f5}body.colorscheme-dark .content .centered .about ul li a{color:#dadada}body.colorscheme-dark .content .centered .about ul li a:hover,body.colorscheme-dark .content .centered .about ul li a:focus{color:#42a5f5}@media(prefers-color-scheme:dark){body.colorscheme-auto .content .post .tags .tag{background-color:#424242}body.colorscheme-auto .content .post .tags .tag a{color:#dadada}body.colorscheme-auto .content .post .tags .tag a:active{color:#dadada}body.colorscheme-auto .content .list ul li .title{color:#dadada}body.colorscheme-auto .content .list ul li .title:hover,body.colorscheme-auto .content .list ul li .title:focus{color:#42a5f5}body.colorscheme-auto .content .centered .about ul li a{color:#dadada}body.colorscheme-auto .content .centered .about ul li a:hover,body.colorscheme-auto .content .centered .about ul li a:focus{color:#42a5f5}}body.colorscheme-dark .notice .notice-title{border-bottom:1px solid #212121}@media(prefers-color-scheme:dark){body.colorscheme-auto .notice .notice-title{border-bottom:1px solid #212121}}body.colorscheme-dark .navigation a,body.colorscheme-dark .navigation span{color:#dadada}body.colorscheme-dark .navigation a:hover,body.colorscheme-dark .navigation a:focus{color:#42a5f5}@media only screen and (max-width:768px){body.colorscheme-dark .navigation .navigation-list{background-color:#212121;border-top:solid 2px #424242;border-bottom:solid 2px #424242}}@media only screen and (max-width:768px){body.colorscheme-dark .navigation .navigation-list .menu-separator{border-top:2px solid #dadada}}@media only screen and (max-width:768px){body.colorscheme-dark .navigation #menu-toggle:checked+label>i{color:#424242}}body.colorscheme-dark .navigation i{color:#dadada}body.colorscheme-dark .navigation i:hover,body.colorscheme-dark .navigation i:focus{color:#42a5f5}body.colorscheme-dark .navigation .menu-button i:hover,body.colorscheme-dark .navigation .menu-button i:focus{color:#dadada}@media(prefers-color-scheme:dark){body.colorscheme-auto .navigation a,body.colorscheme-auto .navigation span{color:#dadada}body.colorscheme-auto .navigation a:hover,body.colorscheme-auto .navigation a:focus{color:#42a5f5}}@media only screen and (prefers-color-scheme:dark) and (max-width:768px){body.colorscheme-auto .navigation .navigation-list{background-color:#212121;border-top:solid 2px #424242;border-bottom:solid 2px #424242}}@media only screen and (prefers-color-scheme:dark) and (max-width:768px){body.colorscheme-auto .navigation .navigation-list .menu-separator{border-top:2px solid #dadada}}@media only screen and (prefers-color-scheme:dark) and (max-width:768px){body.colorscheme-auto .navigation #menu-toggle:checked+label>i{color:#424242}}@media(prefers-color-scheme:dark){body.colorscheme-auto .navigation 
i{color:#dadada}body.colorscheme-auto .navigation i:hover,body.colorscheme-auto .navigation i:focus{color:#42a5f5}body.colorscheme-auto .navigation .menu-button i:hover,body.colorscheme-auto .navigation .menu-button i:focus{color:#dadada}}body.colorscheme-dark .tabs label.tab-label{background-color:#424242;border-color:#4f4f4f}body.colorscheme-dark .tabs input.tab-input:checked+label.tab-label{background-color:#212121}body.colorscheme-dark .tabs .tab-content{background-color:#212121;border-color:#4f4f4f}@media(prefers-color-scheme:dark){body.colorscheme-auto .tabs label.tab-label{background-color:#424242;border-color:#4f4f4f}body.colorscheme-auto .tabs input.tab-input:checked+label.tab-label{background-color:#212121}body.colorscheme-auto .tabs .tab-content{background-color:#212121;border-color:#4f4f4f}}body.colorscheme-dark .taxonomy-element{background-color:#424242}body.colorscheme-dark .taxonomy-element a{color:#dadada}body.colorscheme-dark .taxonomy-element a:active{color:#dadada}@media(prefers-color-scheme:dark){body.colorscheme-auto .taxonomy-element{background-color:#424242}body.colorscheme-auto .taxonomy-element a{color:#dadada}body.colorscheme-auto .taxonomy-element a:active{color:#dadada}}body.colorscheme-dark .footer a{color:#42a5f5}@media(prefers-color-scheme:dark){body.colorscheme-auto .footer a{color:#42a5f5}}body.colorscheme-dark .float-container a{color:#dadada;background-color:#424242}body.colorscheme-dark .float-container a:hover,body.colorscheme-dark .float-container a:focus{color:#42a5f5}@media only screen and (max-width:768px){body.colorscheme-dark .float-container a:hover,body.colorscheme-dark .float-container a:focus{color:#dadada}}@media(prefers-color-scheme:dark){body.colorscheme-auto .float-container a{color:#dadada;background-color:#424242}body.colorscheme-auto .float-container a:hover,body.colorscheme-auto .float-container a:focus{color:#42a5f5}}@media only screen and (prefers-color-scheme:dark) and (max-width:768px){body.colorscheme-auto .float-container a:hover,body.colorscheme-auto .float-container a:focus{color:#dadada}} \ No newline at end of file diff --git a/css/coder.min.0669b62fc2c181a12a4ba10be9984e385c9a5e83dc7cb7ae3759ad0b98d7e8b2.css b/css/coder.min.0669b62fc2c181a12a4ba10be9984e385c9a5e83dc7cb7ae3759ad0b98d7e8b2.css new file mode 100644 index 0000000..0697938 --- /dev/null +++ b/css/coder.min.0669b62fc2c181a12a4ba10be9984e385c9a5e83dc7cb7ae3759ad0b98d7e8b2.css @@ -0,0 +1,6 @@ +@charset "UTF-8";/*!normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css*/html{line-height:1.15;-webkit-text-size-adjust:100%}body{margin:0}main{display:block}h1{font-size:2em;margin:.67em 0}hr{box-sizing:content-box;height:0;overflow:visible}pre{font-family:monospace,monospace;font-size:1em}a{background-color:transparent;word-wrap:break-word}abbr[title]{border-bottom:none;text-decoration:underline;text-decoration:underline 
dotted}b,strong{font-weight:bolder}code,kbd,samp{font-family:monospace,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}img{border-style:none}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;line-height:1.15;margin:0}button,input{overflow:visible}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button}button::-moz-focus-inner,[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner{border-style:none;padding:0}button:-moz-focusring,[type=button]:-moz-focusring,[type=reset]:-moz-focusring,[type=submit]:-moz-focusring{outline:1px dotted ButtonText}fieldset{padding:.35em .75em .625em}legend{box-sizing:border-box;color:inherit;display:table;max-width:100%;padding:0;white-space:normal}progress{vertical-align:baseline}textarea{overflow:auto}[type=checkbox],[type=radio]{box-sizing:border-box;padding:0}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}details{display:block}summary{display:list-item}template{display:none}[hidden]{display:none}/*!Fork Awesome 1.2.0 +License - https://forkaweso.me/Fork-Awesome/license +Copyright 2018 Dave Gandy & Fork Awesome +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*/@font-face{font-family:forkawesome;src:url(../fonts/forkawesome-webfont.eot?v=1.2.0);src:url(../fonts/forkawesome-webfont.eot?#iefix&v=1.2.0)format("embedded-opentype"),url(../fonts/forkawesome-webfont.woff2?v=1.2.0)format("woff2"),url(../fonts/forkawesome-webfont.woff?v=1.2.0)format("woff"),url(../fonts/forkawesome-webfont.ttf?v=1.2.0)format("truetype"),url(../fonts/forkawesome-webfont.svg?v=1.2.0#forkawesomeregular)format("svg");font-weight:400;font-style:normal;font-display:block}.fa{display:inline-block;font:14px/1 ForkAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571429em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14285714em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14285714em;width:2.14285714em;top:.14285714em;text-align:center}.fa-li.fa-lg{left:-1.85714286em}.fa-border{padding:.2em .25em .15em;border:solid .08em #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa.fa-pull-left{margin-right:.3em}.fa.fa-pull-right{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left{margin-right:.3em}.fa.pull-right{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s infinite linear;animation:fa-spin 2s infinite linear}.fa-pulse{-webkit-animation:fa-spin 1s infinite steps(8);animation:fa-spin 1s infinite steps(8)}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0);transform:rotate(0)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0);transform:rotate(0)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scale(-1,1);-ms-transform:scale(-1,1);transform:scale(-1,1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scale(1,-1);-ms-transform:scale(1,-1);transform:scale(1,-1)}:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270,:root .fa-flip-horizontal,:root 
.fa-flip-vertical{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:"\f000"}.fa-music:before{content:"\f001"}.fa-search:before{content:"\f002"}.fa-envelope-o:before{content:"\f003"}.fa-heart:before{content:"\f004"}.fa-star:before{content:"\f005"}.fa-star-o:before{content:"\f006"}.fa-user:before{content:"\f007"}.fa-film:before{content:"\f008"}.fa-th-large:before{content:"\f009"}.fa-th:before{content:"\f00a"}.fa-th-list:before{content:"\f00b"}.fa-check:before{content:"\f00c"}.fa-remove:before,.fa-close:before,.fa-times:before{content:"\f00d"}.fa-search-plus:before{content:"\f00e"}.fa-search-minus:before{content:"\f010"}.fa-power-off:before{content:"\f011"}.fa-signal:before{content:"\f012"}.fa-gear:before,.fa-cog:before{content:"\f013"}.fa-trash-o:before{content:"\f014"}.fa-home:before{content:"\f015"}.fa-file-o:before{content:"\f016"}.fa-clock-o:before{content:"\f017"}.fa-road:before{content:"\f018"}.fa-download:before{content:"\f019"}.fa-arrow-circle-o-down:before{content:"\f01a"}.fa-arrow-circle-o-up:before{content:"\f01b"}.fa-inbox:before{content:"\f01c"}.fa-play-circle-o:before{content:"\f01d"}.fa-rotate-right:before,.fa-repeat:before{content:"\f01e"}.fa-sync:before,.fa-refresh:before{content:"\f021"}.fa-list-alt:before{content:"\f022"}.fa-lock:before{content:"\f023"}.fa-flag:before{content:"\f024"}.fa-headphones:before{content:"\f025"}.fa-volume-off:before{content:"\f026"}.fa-volume-down:before{content:"\f027"}.fa-volume-up:before{content:"\f028"}.fa-qrcode:before{content:"\f029"}.fa-barcode:before{content:"\f02a"}.fa-tag:before{content:"\f02b"}.fa-tags:before{content:"\f02c"}.fa-book:before{content:"\f02d"}.fa-bookmark:before{content:"\f02e"}.fa-print:before{content:"\f02f"}.fa-camera:before{content:"\f030"}.fa-font:before{content:"\f031"}.fa-bold:before{content:"\f032"}.fa-italic:before{content:"\f033"}.fa-text-height:before{content:"\f034"}.fa-text-width:before{content:"\f035"}.fa-align-left:before{content:"\f036"}.fa-align-center:before{content:"\f037"}.fa-align-right:before{content:"\f038"}.fa-align-justify:before{content:"\f039"}.fa-list:before{content:"\f03a"}.fa-dedent:before,.fa-outdent:before{content:"\f03b"}.fa-indent:before{content:"\f03c"}.fa-video:before,.fa-video-camera:before{content:"\f03d"}.fa-photo:before,.fa-image:before,.fa-picture-o:before{content:"\f03e"}.fa-pencil:before{content:"\f040"}.fa-map-marker:before{content:"\f041"}.fa-adjust:before{content:"\f042"}.fa-tint:before{content:"\f043"}.fa-edit:before,.fa-pencil-square-o:before{content:"\f044"}.fa-share-square-o:before{content:"\f045"}.fa-check-square-o:before{content:"\f046"}.fa-arrows:before{content:"\f047"}.fa-step-backward:before{content:"\f048"}.fa-fast-backward:before{content:"\f049"}.fa-backward:before{content:"\f04a"}.fa-play:before{content:"\f04b"}.fa-pause:before{content:"\f04c"}.fa-stop:before{content:"\f04d"}.fa-forward:before{content:"\f04e"}.fa-fast-forward:before{content:"\f050"}.fa-step-forward:before{content:"\f051"}.fa-eject:before{content:"\f052"}.fa-chevron-left:before{content:"\f053"}.fa-chevron-right:before{content:"\f054"}.fa-plus-circle:before{content:"\f055"}.fa-minus-circle:before{content:"\f056"}.fa-times-circle:before{content:"\f057"}.fa-check-circle:before{content:"\f058"}.fa-question-circle:before{content:"\f059"}.fa-info-circle
:before{content:"\f05a"}.fa-crosshairs:before{content:"\f05b"}.fa-times-circle-o:before{content:"\f05c"}.fa-check-circle-o:before{content:"\f05d"}.fa-ban:before{content:"\f05e"}.fa-arrow-left:before{content:"\f060"}.fa-arrow-right:before{content:"\f061"}.fa-arrow-up:before{content:"\f062"}.fa-arrow-down:before{content:"\f063"}.fa-mail-forward:before,.fa-share:before{content:"\f064"}.fa-expand:before{content:"\f065"}.fa-compress:before{content:"\f066"}.fa-plus:before{content:"\f067"}.fa-minus:before{content:"\f068"}.fa-asterisk:before{content:"\f069"}.fa-exclamation-circle:before{content:"\f06a"}.fa-gift:before{content:"\f06b"}.fa-leaf:before{content:"\f06c"}.fa-fire:before{content:"\f06d"}.fa-eye:before{content:"\f06e"}.fa-eye-slash:before{content:"\f070"}.fa-warning:before,.fa-exclamation-triangle:before{content:"\f071"}.fa-plane:before{content:"\f072"}.fa-calendar:before{content:"\f073"}.fa-random:before{content:"\f074"}.fa-comment:before{content:"\f075"}.fa-magnet:before{content:"\f076"}.fa-chevron-up:before{content:"\f077"}.fa-chevron-down:before{content:"\f078"}.fa-retweet:before{content:"\f079"}.fa-shopping-cart:before{content:"\f07a"}.fa-folder:before{content:"\f07b"}.fa-folder-open:before{content:"\f07c"}.fa-arrows-v:before{content:"\f07d"}.fa-arrows-h:before{content:"\f07e"}.fa-bar-chart-o:before,.fa-bar-chart:before{content:"\f080"}.fa-twitter-square:before{content:"\f081"}.fa-facebook-square:before{content:"\f082"}.fa-camera-retro:before{content:"\f083"}.fa-key:before{content:"\f084"}.fa-gears:before,.fa-cogs:before{content:"\f085"}.fa-comments:before{content:"\f086"}.fa-thumbs-o-up:before{content:"\f087"}.fa-thumbs-o-down:before{content:"\f088"}.fa-star-half:before{content:"\f089"}.fa-heart-o:before{content:"\f08a"}.fa-sign-out:before{content:"\f08b"}.fa-linkedin-square:before{content:"\f08c"}.fa-thumb-tack:before{content:"\f08d"}.fa-external-link:before{content:"\f08e"}.fa-sign-in:before{content:"\f090"}.fa-trophy:before{content:"\f091"}.fa-github-square:before{content:"\f092"}.fa-upload:before{content:"\f093"}.fa-lemon-o:before{content:"\f094"}.fa-phone:before{content:"\f095"}.fa-square-o:before{content:"\f096"}.fa-bookmark-o:before{content:"\f097"}.fa-phone-square:before{content:"\f098"}.fa-twitter:before{content:"\f099"}.fa-facebook-f:before,.fa-facebook:before{content:"\f09a"}.fa-github:before{content:"\f09b"}.fa-unlock:before{content:"\f09c"}.fa-credit-card:before{content:"\f09d"}.fa-feed:before,.fa-rss:before{content:"\f09e"}.fa-hdd-o:before{content:"\f0a0"}.fa-bullhorn:before{content:"\f0a1"}.fa-bell-o:before{content:"\f0f3"}.fa-certificate:before{content:"\f0a3"}.fa-hand-o-right:before{content:"\f0a4"}.fa-hand-o-left:before{content:"\f0a5"}.fa-hand-o-up:before{content:"\f0a6"}.fa-hand-o-down:before{content:"\f0a7"}.fa-arrow-circle-left:before{content:"\f0a8"}.fa-arrow-circle-right:before{content:"\f0a9"}.fa-arrow-circle-up:before{content:"\f0aa"}.fa-arrow-circle-down:before{content:"\f0ab"}.fa-globe:before{content:"\f0ac"}.fa-globe-e:before{content:"\f304"}.fa-globe-w:before{content:"\f305"}.fa-wrench:before{content:"\f0ad"}.fa-tasks:before{content:"\f0ae"}.fa-filter:before{content:"\f0b0"}.fa-briefcase:before{content:"\f0b1"}.fa-arrows-alt:before{content:"\f0b2"}.fa-community:before,.fa-group:before,.fa-users:before{content:"\f0c0"}.fa-chain:before,.fa-link:before{content:"\f0c1"}.fa-cloud:before{content:"\f0c2"}.fa-flask:before{content:"\f0c3"}.fa-cut:before,.fa-scissors:before{content:"\f0c4"}.fa-copy:before,.fa-files-o:before{content:"\f0c5"}.fa-paperclip:before{con
tent:"\f0c6"}.fa-save:before,.fa-floppy-o:before{content:"\f0c7"}.fa-square:before{content:"\f0c8"}.fa-navicon:before,.fa-reorder:before,.fa-bars:before{content:"\f0c9"}.fa-list-ul:before{content:"\f0ca"}.fa-list-ol:before{content:"\f0cb"}.fa-strikethrough:before{content:"\f0cc"}.fa-underline:before{content:"\f0cd"}.fa-table:before{content:"\f0ce"}.fa-magic:before{content:"\f0d0"}.fa-truck:before{content:"\f0d1"}.fa-pinterest:before{content:"\f0d2"}.fa-pinterest-square:before{content:"\f0d3"}.fa-google-plus-square:before{content:"\f0d4"}.fa-google-plus-g:before,.fa-google-plus:before{content:"\f0d5"}.fa-money:before{content:"\f0d6"}.fa-caret-down:before{content:"\f0d7"}.fa-caret-up:before{content:"\f0d8"}.fa-caret-left:before{content:"\f0d9"}.fa-caret-right:before{content:"\f0da"}.fa-columns:before{content:"\f0db"}.fa-unsorted:before,.fa-sort:before{content:"\f0dc"}.fa-sort-down:before,.fa-sort-desc:before{content:"\f0dd"}.fa-sort-up:before,.fa-sort-asc:before{content:"\f0de"}.fa-envelope:before{content:"\f0e0"}.fa-linkedin:before{content:"\f0e1"}.fa-rotate-left:before,.fa-undo:before{content:"\f0e2"}.fa-legal:before,.fa-gavel:before{content:"\f0e3"}.fa-dashboard:before,.fa-tachometer:before{content:"\f0e4"}.fa-comment-o:before{content:"\f0e5"}.fa-comments-o:before{content:"\f0e6"}.fa-flash:before,.fa-bolt:before{content:"\f0e7"}.fa-sitemap:before{content:"\f0e8"}.fa-umbrella:before{content:"\f0e9"}.fa-paste:before,.fa-clipboard:before{content:"\f0ea"}.fa-lightbulb-o:before{content:"\f0eb"}.fa-exchange:before{content:"\f0ec"}.fa-cloud-download:before{content:"\f0ed"}.fa-cloud-upload:before{content:"\f0ee"}.fa-user-md:before{content:"\f0f0"}.fa-stethoscope:before{content:"\f0f1"}.fa-suitcase:before{content:"\f0f2"}.fa-bell:before{content:"\f0a2"}.fa-coffee:before{content:"\f0f4"}.fa-utensils:before,.fa-cutlery:before{content:"\f0f5"}.fa-file-text-o:before{content:"\f0f6"}.fa-building-o:before{content:"\f0f7"}.fa-hospital-o:before{content:"\f0f8"}.fa-ambulance:before{content:"\f0f9"}.fa-medkit:before{content:"\f0fa"}.fa-fighter-jet:before{content:"\f0fb"}.fa-beer:before{content:"\f0fc"}.fa-h-square:before{content:"\f0fd"}.fa-plus-square:before{content:"\f0fe"}.fa-angle-double-left:before{content:"\f100"}.fa-angle-double-right:before{content:"\f101"}.fa-angle-double-up:before{content:"\f102"}.fa-angle-double-down:before{content:"\f103"}.fa-angle-left:before{content:"\f104"}.fa-angle-right:before{content:"\f105"}.fa-angle-up:before{content:"\f106"}.fa-angle-down:before{content:"\f107"}.fa-desktop:before{content:"\f108"}.fa-laptop:before{content:"\f109"}.fa-tablet:before{content:"\f10a"}.fa-mobile-phone:before,.fa-mobile:before{content:"\f10b"}.fa-circle-o:before{content:"\f10c"}.fa-quote-left:before{content:"\f10d"}.fa-quote-right:before{content:"\f10e"}.fa-spinner:before{content:"\f110"}.fa-circle:before{content:"\f111"}.fa-mail-reply:before,.fa-reply:before{content:"\f112"}.fa-github-alt:before{content:"\f113"}.fa-folder-o:before{content:"\f114"}.fa-folder-open-o:before{content:"\f115"}.fa-smile-o:before{content:"\f118"}.fa-frown-o:before{content:"\f119"}.fa-meh-o:before{content:"\f11a"}.fa-gamepad:before{content:"\f11b"}.fa-keyboard-o:before{content:"\f11c"}.fa-flag-o:before{content:"\f11d"}.fa-flag-checkered:before{content:"\f11e"}.fa-terminal:before{content:"\f120"}.fa-code:before{content:"\f121"}.fa-mail-reply-all:before,.fa-reply-all:before{content:"\f122"}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:"\f123"}.fa-location-arrow:before{content:"\f124"
}.fa-crop:before{content:"\f125"}.fa-code-fork:before{content:"\f126"}.fa-unlink:before,.fa-chain-broken:before{content:"\f127"}.fa-question:before{content:"\f128"}.fa-info:before{content:"\f129"}.fa-exclamation:before{content:"\f12a"}.fa-superscript:before{content:"\f12b"}.fa-subscript:before{content:"\f12c"}.fa-eraser:before{content:"\f12d"}.fa-puzzle-piece:before{content:"\f12e"}.fa-microphone:before{content:"\f130"}.fa-microphone-slash:before{content:"\f131"}.fa-shield:before{content:"\f132"}.fa-calendar-o:before{content:"\f133"}.fa-fire-extinguisher:before{content:"\f134"}.fa-rocket:before{content:"\f135"}.fa-maxcdn:before{content:"\f136"}.fa-chevron-circle-left:before{content:"\f137"}.fa-chevron-circle-right:before{content:"\f138"}.fa-chevron-circle-up:before{content:"\f139"}.fa-chevron-circle-down:before{content:"\f13a"}.fa-html5:before{content:"\f13b"}.fa-css3:before{content:"\f13c"}.fa-anchor:before{content:"\f13d"}.fa-unlock-alt:before{content:"\f13e"}.fa-bullseye:before{content:"\f140"}.fa-ellipsis-h:before{content:"\f141"}.fa-ellipsis-v:before{content:"\f142"}.fa-rss-square:before{content:"\f143"}.fa-play-circle:before{content:"\f144"}.fa-ticket:before{content:"\f145"}.fa-minus-square:before{content:"\f146"}.fa-minus-square-o:before{content:"\f147"}.fa-level-up:before{content:"\f148"}.fa-level-down:before{content:"\f149"}.fa-check-square:before{content:"\f14a"}.fa-pencil-square:before{content:"\f14b"}.fa-external-link-square:before{content:"\f14c"}.fa-share-square:before{content:"\f14d"}.fa-compass:before{content:"\f14e"}.fa-toggle-down:before,.fa-caret-square-o-down:before{content:"\f150"}.fa-toggle-up:before,.fa-caret-square-o-up:before{content:"\f151"}.fa-toggle-right:before,.fa-caret-square-o-right:before{content:"\f152"}.fa-euro:before,.fa-eur:before{content:"\f153"}.fa-pound:before,.fa-gbp:before{content:"\f154"}.fa-dollar:before,.fa-usd:before{content:"\f155"}.fa-rupee:before,.fa-inr:before{content:"\f156"}.fa-cny:before,.fa-rmb:before,.fa-yen:before,.fa-jpy:before{content:"\f157"}.fa-ruble:before,.fa-rouble:before,.fa-rub:before{content:"\f158"}.fa-won:before,.fa-krw:before{content:"\f159"}.fa-bitcoin:before,.fa-btc:before{content:"\f15a"}.fa-file:before{content:"\f15b"}.fa-file-text:before{content:"\f15c"}.fa-sort-alpha-down:before,.fa-sort-alpha-asc:before{content:"\f15d"}.fa-sort-alpha-up:before,.fa-sort-alpha-desc:before{content:"\f15e"}.fa-sort-amount-down:before,.fa-sort-amount-asc:before{content:"\f160"}.fa-sort-amount-up:before,.fa-sort-amount-desc:before{content:"\f161"}.fa-sort-numeric-down:before,.fa-sort-numeric-asc:before{content:"\f162"}.fa-sort-numeric-up:before,.fa-sort-numeric-desc:before{content:"\f163"}.fa-thumbs-up:before{content:"\f164"}.fa-thumbs-down:before{content:"\f165"}.fa-youtube-square:before{content:"\f166"}.fa-youtube:before{content:"\f167"}.fa-xing:before{content:"\f168"}.fa-xing-square:before{content:"\f169"}.fa-youtube-play:before{content:"\f16a"}.fa-dropbox:before{content:"\f16b"}.fa-stack-overflow:before{content:"\f16c"}.fa-instagram:before{content:"\f16d"}.fa-flickr:before{content:"\f16e"}.fa-adn:before{content:"\f170"}.fa-bitbucket:before{content:"\f171"}.fa-bitbucket-square:before{content:"\f172"}.fa-tumblr:before{content:"\f173"}.fa-tumblr-square:before{content:"\f174"}.fa-long-arrow-down:before{content:"\f175"}.fa-long-arrow-up:before{content:"\f176"}.fa-long-arrow-left:before{content:"\f177"}.fa-long-arrow-right:before{content:"\f178"}.fa-apple:before{content:"\f179"}.fa-windows:before{content:"\f17a"}.fa-android:before{content:"
\f17b"}.fa-linux:before{content:"\f17c"}.fa-dribbble:before{content:"\f17d"}.fa-skype:before{content:"\f17e"}.fa-foursquare:before{content:"\f180"}.fa-trello:before{content:"\f181"}.fa-female:before{content:"\f182"}.fa-male:before{content:"\f183"}.fa-gittip:before,.fa-gratipay:before{content:"\f184"}.fa-sun-o:before{content:"\f185"}.fa-moon-o:before{content:"\f186"}.fa-archive:before{content:"\f187"}.fa-bug:before{content:"\f188"}.fa-vk:before{content:"\f189"}.fa-weibo:before{content:"\f18a"}.fa-renren:before{content:"\f18b"}.fa-pagelines:before{content:"\f18c"}.fa-stack-exchange:before{content:"\f18d"}.fa-arrow-circle-o-right:before{content:"\f18e"}.fa-arrow-circle-o-left:before{content:"\f190"}.fa-toggle-left:before,.fa-caret-square-o-left:before{content:"\f191"}.fa-dot-circle-o:before{content:"\f192"}.fa-wheelchair:before{content:"\f193"}.fa-vimeo-square:before{content:"\f194"}.fa-turkish-lira:before,.fa-try:before{content:"\f195"}.fa-plus-square-o:before{content:"\f196"}.fa-space-shuttle:before{content:"\f197"}.fa-slack:before{content:"\f198"}.fa-envelope-square:before{content:"\f199"}.fa-wordpress:before{content:"\f19a"}.fa-openid:before{content:"\f19b"}.fa-institution:before,.fa-bank:before,.fa-university:before{content:"\f19c"}.fa-mortar-board:before,.fa-graduation-cap:before{content:"\f19d"}.fa-yahoo:before{content:"\f19e"}.fa-google:before{content:"\f1a0"}.fa-reddit:before{content:"\f1a1"}.fa-reddit-square:before{content:"\f1a2"}.fa-stumbleupon-circle:before{content:"\f1a3"}.fa-stumbleupon:before{content:"\f1a4"}.fa-delicious:before{content:"\f1a5"}.fa-digg:before{content:"\f1a6"}.fa-drupal:before{content:"\f1a9"}.fa-joomla:before{content:"\f1aa"}.fa-language:before{content:"\f1ab"}.fa-fax:before{content:"\f1ac"}.fa-building:before{content:"\f1ad"}.fa-child:before{content:"\f1ae"}.fa-paw:before{content:"\f1b0"}.fa-utensil-spoon:before,.fa-spoon:before{content:"\f1b1"}.fa-cube:before{content:"\f1b2"}.fa-cubes:before{content:"\f1b3"}.fa-behance:before{content:"\f1b4"}.fa-behance-square:before{content:"\f1b5"}.fa-steam:before{content:"\f1b6"}.fa-steam-square:before{content:"\f1b7"}.fa-recycle:before{content:"\f1b8"}.fa-automobile:before,.fa-car:before{content:"\f1b9"}.fa-cab:before,.fa-taxi:before{content:"\f1ba"}.fa-tree:before{content:"\f1bb"}.fa-spotify:before{content:"\f1bc"}.fa-deviantart:before{content:"\f1bd"}.fa-soundcloud:before{content:"\f1be"}.fa-database:before{content:"\f1c0"}.fa-file-pdf-o:before{content:"\f1c1"}.fa-file-word-o:before{content:"\f1c2"}.fa-file-excel-o:before{content:"\f1c3"}.fa-file-powerpoint-o:before{content:"\f1c4"}.fa-file-photo-o:before,.fa-file-picture-o:before,.fa-file-image-o:before{content:"\f1c5"}.fa-file-zip-o:before,.fa-file-archive-o:before{content:"\f1c6"}.fa-file-sound-o:before,.fa-file-audio-o:before{content:"\f1c7"}.fa-file-movie-o:before,.fa-file-video-o:before{content:"\f1c8"}.fa-file-code-o:before{content:"\f1c9"}.fa-vine:before{content:"\f1ca"}.fa-codepen:before{content:"\f1cb"}.fa-jsfiddle:before{content:"\f1cc"}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-saver:before,.fa-support:before,.fa-life-ring:before{content:"\f1cd"}.fa-circle-o-notch:before{content:"\f1ce"}.fa-ra:before,.fa-resistance:before,.fa-rebel:before{content:"\f1d0"}.fa-ge:before,.fa-empire:before{content:"\f1d1"}.fa-git-square:before{content:"\f1d2"}.fa-git:before{content:"\f1d3"}.fa-y-combinator-square:before,.fa-yc-square:before,.fa-hacker-news:before{content:"\f1d4"}.fa-tencent-weibo:before{content:"\f1d5"}.fa-qq:before{content:"\f1d6"}.fa-wechat:before,.fa
-weixin:before{content:"\f1d7"}.fa-send:before,.fa-paper-plane:before{content:"\f1d8"}.fa-send-o:before,.fa-paper-plane-o:before{content:"\f1d9"}.fa-history:before{content:"\f1da"}.fa-circle-thin:before{content:"\f1db"}.fa-heading:before,.fa-header:before{content:"\f1dc"}.fa-paragraph:before{content:"\f1dd"}.fa-sliders:before{content:"\f1de"}.fa-share-alt:before{content:"\f1e0"}.fa-share-alt-square:before{content:"\f1e1"}.fa-bomb:before{content:"\f1e2"}.fa-soccer-ball-o:before,.fa-futbol-o:before{content:"\f1e3"}.fa-tty:before{content:"\f1e4"}.fa-binoculars:before{content:"\f1e5"}.fa-plug:before{content:"\f1e6"}.fa-slideshare:before{content:"\f1e7"}.fa-twitch:before{content:"\f1e8"}.fa-yelp:before{content:"\f1e9"}.fa-newspaper-o:before{content:"\f1ea"}.fa-wifi:before{content:"\f1eb"}.fa-calculator:before{content:"\f1ec"}.fa-paypal:before{content:"\f1ed"}.fa-google-wallet:before{content:"\f1ee"}.fa-cc-visa:before{content:"\f1f0"}.fa-cc-mastercard:before{content:"\f1f1"}.fa-cc-discover:before{content:"\f1f2"}.fa-cc-amex:before{content:"\f1f3"}.fa-cc-paypal:before{content:"\f1f4"}.fa-cc-stripe:before{content:"\f1f5"}.fa-bell-slash:before{content:"\f1f6"}.fa-bell-slash-o:before{content:"\f1f7"}.fa-trash:before{content:"\f1f8"}.fa-copyright:before{content:"\f1f9"}.fa-at:before{content:"\f1fa"}.fa-eyedropper:before{content:"\f1fb"}.fa-paint-brush:before{content:"\f1fc"}.fa-birthday-cake:before{content:"\f1fd"}.fa-area-chart:before{content:"\f1fe"}.fa-pie-chart:before{content:"\f200"}.fa-line-chart:before{content:"\f201"}.fa-lastfm:before{content:"\f202"}.fa-lastfm-square:before{content:"\f203"}.fa-toggle-off:before{content:"\f204"}.fa-toggle-on:before{content:"\f205"}.fa-bicycle:before{content:"\f206"}.fa-bus:before{content:"\f207"}.fa-ioxhost:before{content:"\f208"}.fa-angellist:before{content:"\f209"}.fa-closed-captioning:before,.fa-cc:before{content:"\f20a"}.fa-shekel:before,.fa-sheqel:before,.fa-ils:before{content:"\f20b"}.fa-meanpath:before{content:"\f20c"}.fa-buysellads:before{content:"\f20d"}.fa-connectdevelop:before{content:"\f20e"}.fa-dashcube:before{content:"\f210"}.fa-forumbee:before{content:"\f211"}.fa-leanpub:before{content:"\f212"}.fa-sellsy:before{content:"\f213"}.fa-shirtsinbulk:before{content:"\f214"}.fa-simplybuilt:before{content:"\f215"}.fa-skyatlas:before{content:"\f216"}.fa-cart-plus:before{content:"\f217"}.fa-cart-arrow-down:before{content:"\f218"}.fa-gem:before,.fa-diamond:before{content:"\f219"}.fa-ship:before{content:"\f21a"}.fa-user-secret:before{content:"\f21b"}.fa-motorcycle:before{content:"\f21c"}.fa-street-view:before{content:"\f21d"}.fa-heartbeat:before{content:"\f21e"}.fa-venus:before{content:"\f221"}.fa-mars:before{content:"\f222"}.fa-mercury:before{content:"\f223"}.fa-intersex:before,.fa-transgender:before{content:"\f224"}.fa-transgender-alt:before{content:"\f225"}.fa-venus-double:before{content:"\f226"}.fa-mars-double:before{content:"\f227"}.fa-venus-mars:before{content:"\f228"}.fa-mars-stroke:before{content:"\f229"}.fa-mars-stroke-v:before{content:"\f22a"}.fa-mars-stroke-h:before{content:"\f22b"}.fa-neuter:before{content:"\f22c"}.fa-genderless:before{content:"\f22d"}.fa-facebook-official:before{content:"\f230"}.fa-pinterest-p:before{content:"\f231"}.fa-whatsapp:before{content:"\f232"}.fa-server:before{content:"\f233"}.fa-user-plus:before{content:"\f234"}.fa-user-times:before{content:"\f235"}.fa-hotel:before,.fa-bed:before{content:"\f236"}.fa-viacoin:before{content:"\f237"}.fa-train:before{content:"\f238"}.fa-subway:before{content:"\f239"}.fa-medium:before{conte
nt:"\f23a"}.fa-medium-square:before{content:"\f2f8"}.fa-yc:before,.fa-y-combinator:before{content:"\f23b"}.fa-optin-monster:before{content:"\f23c"}.fa-opencart:before{content:"\f23d"}.fa-expeditedssl:before{content:"\f23e"}.fa-battery-4:before,.fa-battery:before,.fa-battery-full:before{content:"\f240"}.fa-battery-3:before,.fa-battery-three-quarters:before{content:"\f241"}.fa-battery-2:before,.fa-battery-half:before{content:"\f242"}.fa-battery-1:before,.fa-battery-quarter:before{content:"\f243"}.fa-battery-0:before,.fa-battery-empty:before{content:"\f244"}.fa-mouse-pointer:before{content:"\f245"}.fa-i-cursor:before{content:"\f246"}.fa-object-group:before{content:"\f247"}.fa-object-ungroup:before{content:"\f248"}.fa-sticky-note:before{content:"\f249"}.fa-sticky-note-o:before{content:"\f24a"}.fa-cc-jcb:before{content:"\f24b"}.fa-cc-diners-club:before{content:"\f24c"}.fa-clone:before{content:"\f24d"}.fa-balance-scale:before{content:"\f24e"}.fa-hourglass-o:before{content:"\f250"}.fa-hourglass-1:before,.fa-hourglass-start:before{content:"\f251"}.fa-hourglass-2:before,.fa-hourglass-half:before{content:"\f252"}.fa-hourglass-3:before,.fa-hourglass-end:before{content:"\f253"}.fa-hourglass:before{content:"\f254"}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:"\f255"}.fa-hand-stop-o:before,.fa-hand-paper-o:before{content:"\f256"}.fa-hand-scissors-o:before{content:"\f257"}.fa-hand-lizard-o:before{content:"\f258"}.fa-hand-spock-o:before{content:"\f259"}.fa-hand-pointer-o:before{content:"\f25a"}.fa-hand-peace-o:before{content:"\f25b"}.fa-trademark:before{content:"\f25c"}.fa-registered:before{content:"\f25d"}.fa-creative-commons:before{content:"\f25e"}.fa-gg:before{content:"\f260"}.fa-gg-circle:before{content:"\f261"}.fa-tripadvisor:before{content:"\f262"}.fa-odnoklassniki:before{content:"\f263"}.fa-odnoklassniki-square:before{content:"\f264"}.fa-get-pocket:before{content:"\f265"}.fa-wikipedia-w:before{content:"\f266"}.fa-safari:before{content:"\f267"}.fa-chrome:before{content:"\f268"}.fa-firefox:before{content:"\f269"}.fa-opera:before{content:"\f26a"}.fa-internet-explorer:before{content:"\f26b"}.fa-tv:before,.fa-television:before{content:"\f26c"}.fa-contao:before{content:"\f26d"}.fa-500px:before{content:"\f26e"}.fa-amazon:before{content:"\f270"}.fa-calendar-plus-o:before{content:"\f271"}.fa-calendar-minus-o:before{content:"\f272"}.fa-calendar-times-o:before{content:"\f273"}.fa-calendar-check-o:before{content:"\f274"}.fa-industry:before{content:"\f275"}.fa-map-pin:before{content:"\f276"}.fa-map-signs:before{content:"\f277"}.fa-map-o:before{content:"\f278"}.fa-map:before{content:"\f279"}.fa-commenting:before{content:"\f27a"}.fa-commenting-o:before{content:"\f27b"}.fa-houzz:before{content:"\f27c"}.fa-vimeo-v:before,.fa-vimeo:before{content:"\f27d"}.fa-black-tie:before{content:"\f27e"}.fa-fonticons:before{content:"\f280"}.fa-reddit-alien:before{content:"\f281"}.fa-edge:before{content:"\f282"}.fa-credit-card-alt:before{content:"\f283"}.fa-codiepie:before{content:"\f284"}.fa-modx:before{content:"\f285"}.fa-fort-awesome:before{content:"\f286"}.fa-usb:before{content:"\f287"}.fa-product-hunt:before{content:"\f288"}.fa-mixcloud:before{content:"\f289"}.fa-scribd:before{content:"\f28a"}.fa-pause-circle:before{content:"\f28b"}.fa-pause-circle-o:before{content:"\f28c"}.fa-stop-circle:before{content:"\f28d"}.fa-stop-circle-o:before{content:"\f28e"}.fa-shopping-bag:before{content:"\f290"}.fa-shopping-basket:before{content:"\f291"}.fa-hashtag:before{content:"\f292"}.fa-bluetooth:before{content:"\f293"}.fa-bluetooth
-b:before{content:"\f294"}.fa-percent:before{content:"\f295"}.fa-gitlab:before{content:"\f296"}.fa-wpbeginner:before{content:"\f297"}.fa-wpforms:before{content:"\f298"}.fa-envira:before{content:"\f299"}.fa-universal-access:before{content:"\f29a"}.fa-wheelchair-alt:before{content:"\f29b"}.fa-question-circle-o:before{content:"\f29c"}.fa-blind:before{content:"\f29d"}.fa-audio-description:before{content:"\f29e"}.fa-phone-volume:before,.fa-volume-control-phone:before{content:"\f2a0"}.fa-braille:before{content:"\f2a1"}.fa-assistive-listening-systems:before{content:"\f2a2"}.fa-asl-interpreting:before,.fa-american-sign-language-interpreting:before{content:"\f2a3"}.fa-deafness:before,.fa-hard-of-hearing:before,.fa-deaf:before{content:"\f2a4"}.fa-glide:before{content:"\f2a5"}.fa-glide-g:before{content:"\f2a6"}.fa-signing:before,.fa-sign-language:before{content:"\f2a7"}.fa-low-vision:before{content:"\f2a8"}.fa-viadeo:before{content:"\f2a9"}.fa-viadeo-square:before{content:"\f2aa"}.fa-snapchat:before{content:"\f2ab"}.fa-snapchat-ghost:before{content:"\f2ac"}.fa-snapchat-square:before{content:"\f2ad"}.fa-first-order:before{content:"\f2b0"}.fa-yoast:before{content:"\f2b1"}.fa-themeisle:before{content:"\f2b2"}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:"\f2b3"}.fa-fa:before,.fa-font-awesome:before{content:"\f2b4"}.fa-handshake-o:before{content:"\f2b5"}.fa-envelope-open:before{content:"\f2b6"}.fa-envelope-open-o:before{content:"\f2b7"}.fa-linode:before{content:"\f2b8"}.fa-address-book:before{content:"\f2b9"}.fa-address-book-o:before{content:"\f2ba"}.fa-vcard:before,.fa-address-card:before{content:"\f2bb"}.fa-vcard-o:before,.fa-address-card-o:before{content:"\f2bc"}.fa-user-circle:before{content:"\f2bd"}.fa-user-circle-o:before{content:"\f2be"}.fa-user-o:before{content:"\f2c0"}.fa-id-badge:before{content:"\f2c1"}.fa-drivers-license:before,.fa-id-card:before{content:"\f2c2"}.fa-drivers-license-o:before,.fa-id-card-o:before{content:"\f2c3"}.fa-quora:before{content:"\f2c4"}.fa-free-code-camp:before{content:"\f2c5"}.fa-telegram:before{content:"\f2c6"}.fa-thermometer-4:before,.fa-thermometer:before,.fa-thermometer-full:before{content:"\f2c7"}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:"\f2c8"}.fa-thermometer-2:before,.fa-thermometer-half:before{content:"\f2c9"}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:"\f2ca"}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:"\f2cb"}.fa-shower:before{content:"\f2cc"}.fa-bathtub:before,.fa-s15:before,.fa-bath:before{content:"\f2cd"}.fa-podcast:before{content:"\f2ce"}.fa-window-maximize:before{content:"\f2d0"}.fa-window-minimize:before{content:"\f2d1"}.fa-window-restore:before{content:"\f2d2"}.fa-times-rectangle:before,.fa-window-close:before{content:"\f2d3"}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:"\f2d4"}.fa-bandcamp:before{content:"\f2d5"}.fa-grav:before{content:"\f2d6"}.fa-etsy:before{content:"\f2d7"}.fa-imdb:before{content:"\f2d8"}.fa-ravelry:before{content:"\f2d9"}.fa-eercast:before{content:"\f2da"}.fa-microchip:before{content:"\f2db"}.fa-snowflake-o:before{content:"\f2dc"}.fa-superpowers:before{content:"\f2dd"}.fa-wpexplorer:before{content:"\f2de"}.fa-meetup:before{content:"\f2e0"}.fa-mastodon:before{content:"\f2e1"}.fa-mastodon-alt:before{content:"\f2e2"}.fa-fork-circle:before,.fa-fork-awesome:before{content:"\f2e3"}.fa-peertube:before{content:"\f2e4"}.fa-diaspora:before{content:"\f2e5"}.fa-friendica:before{content:"\f2e6"}.fa-gnu-social:before{content:"\f2e7"}.fa-l
iberapay-square:before{content:"\f2e8"}.fa-liberapay:before{content:"\f2e9"}.fa-ssb:before,.fa-scuttlebutt:before{content:"\f2ea"}.fa-hubzilla:before{content:"\f2eb"}.fa-social-home:before{content:"\f2ec"}.fa-artstation:before{content:"\f2ed"}.fa-discord:before{content:"\f2ee"}.fa-discord-alt:before{content:"\f2ef"}.fa-patreon:before{content:"\f2f0"}.fa-snowdrift:before{content:"\f2f1"}.fa-activitypub:before{content:"\f2f2"}.fa-ethereum:before{content:"\f2f3"}.fa-keybase:before{content:"\f2f4"}.fa-shaarli:before{content:"\f2f5"}.fa-shaarli-o:before{content:"\f2f6"}.fa-cut-key:before,.fa-key-modern:before{content:"\f2f7"}.fa-xmpp:before{content:"\f2f9"}.fa-archive-org:before{content:"\f2fc"}.fa-freedombox:before{content:"\f2fd"}.fa-facebook-messenger:before{content:"\f2fe"}.fa-debian:before{content:"\f2ff"}.fa-mastodon-square:before{content:"\f300"}.fa-tipeee:before{content:"\f301"}.fa-react:before{content:"\f302"}.fa-dogmazic:before{content:"\f303"}.fa-zotero:before{content:"\f309"}.fa-nodejs:before{content:"\f308"}.fa-nextcloud:before{content:"\f306"}.fa-nextcloud-square:before{content:"\f307"}.fa-hackaday:before{content:"\f30a"}.fa-laravel:before{content:"\f30b"}.fa-signalapp:before{content:"\f30c"}.fa-gnupg:before{content:"\f30d"}.fa-php:before{content:"\f30e"}.fa-ffmpeg:before{content:"\f30f"}.fa-joplin:before{content:"\f310"}.fa-syncthing:before{content:"\f311"}.fa-inkscape:before{content:"\f312"}.fa-matrix-org:before{content:"\f313"}.fa-pixelfed:before{content:"\f314"}.fa-bootstrap:before{content:"\f315"}.fa-dev-to:before{content:"\f316"}.fa-hashnode:before{content:"\f317"}.fa-jirafeau:before{content:"\f318"}.fa-emby:before{content:"\f319"}.fa-wikidata:before{content:"\f31a"}.fa-gimp:before{content:"\f31b"}.fa-c:before{content:"\f31c"}.fa-digitalocean:before{content:"\f31d"}.fa-att:before{content:"\f31e"}.fa-gitea:before{content:"\f31f"}.fa-file-epub:before{content:"\f321"}.fa-python:before{content:"\f322"}.fa-archlinux:before{content:"\f323"}.fa-pleroma:before{content:"\f324"}.fa-unsplash:before{content:"\f325"}.fa-hackster:before{content:"\f326"}.fa-spell-check:before{content:"\f327"}.fa-moon:before{content:"\f328"}.fa-sun:before{content:"\f329"}.fa-f-droid:before{content:"\f32a"}.fa-biometric:before{content:"\f32b"}.fa-wire:before{content:"\f32c"}.fa-tor-onion:before{content:"\f32e"}.fa-volume-mute:before{content:"\f32f"}.fa-bell-ringing:before{content:"\f32d"}.fa-bell-ringing-o:before{content:"\f330"}.fa-hal:before{content:"\f333"}.fa-jupyter:before{content:"\f335"}.fa-julia:before{content:"\f334"}.fa-classicpress:before{content:"\f331"}.fa-classicpress-circle:before{content:"\f332"}.fa-open-collective:before{content:"\f336"}.fa-orcid:before{content:"\f337"}.fa-researchgate:before{content:"\f338"}.fa-funkwhale:before{content:"\f339"}.fa-askfm:before{content:"\f33a"}.fa-blockstack:before{content:"\f33b"}.fa-boardgamegeek:before{content:"\f33c"}.fa-bunny:before{content:"\f35f"}.fa-buymeacoffee:before{content:"\f33d"}.fa-cc-by:before{content:"\f33e"}.fa-creative-commons-alt:before,.fa-cc-cc:before{content:"\f33f"}.fa-cc-nc-eu:before{content:"\f341"}.fa-cc-nc-jp:before{content:"\f342"}.fa-cc-nc:before{content:"\f340"}.fa-cc-nd:before{content:"\f343"}.fa-cc-pd:before{content:"\f344"}.fa-cc-remix:before{content:"\f345"}.fa-cc-sa:before{content:"\f346"}.fa-cc-share:before{content:"\f347"}.fa-cc-zero:before{content:"\f348"}.fa-conway-hacker:before,.fa-conway-glider:before{content:"\f349"}.fa-csharp:before{content:"\f34a"}.fa-email-bulk:before{content:"\f34b"}.fa-email-bulk-o:before{conten
t:"\f34c"}.fa-gnu:before{content:"\f34d"}.fa-google-play:before{content:"\f34e"}.fa-heroku:before{content:"\f34f"}.fa-hassio:before,.fa-home-assistant:before{content:"\f350"}.fa-java:before{content:"\f351"}.fa-mariadb:before{content:"\f352"}.fa-markdown:before{content:"\f353"}.fa-mysql:before{content:"\f354"}.fa-nordcast:before{content:"\f355"}.fa-plume:before{content:"\f356"}.fa-postgresql:before{content:"\f357"}.fa-sass-alt:before{content:"\f359"}.fa-sass:before{content:"\f358"}.fa-skate:before{content:"\f35a"}.fa-sketchfab:before{content:"\f35b"}.fa-tex:before{content:"\f35c"}.fa-textpattern:before{content:"\f35d"}.fa-unity:before{content:"\f35e"}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}*,*:after,*:before{box-sizing:inherit}html{box-sizing:border-box;font-size:62.5%}body{color:#212121;background-color:#fafafa;font-family:-apple-system,BlinkMacSystemFont,segoe ui,Roboto,Oxygen-Sans,Ubuntu,Cantarell,helvetica neue,Helvetica,游ゴシック,pingfang sc,STXihei,华文细黑,microsoft yahei,微软雅黑,SimSun,宋体,Heiti,黑体,sans-serif;font-size:1.8em;font-weight:400;line-height:1.8em}@media only screen and (max-width:768px){body{font-size:1.6em;line-height:1.6em}}iframe[src*=disqus]{color-scheme:light}a{font-weight:500;color:#1565c0;text-decoration:none;transition:all .25s ease-in}a:focus,a:hover{text-decoration:underline}p{margin:2rem 0}h1,h2,h3,h4,h5,h6{font-family:-apple-system,BlinkMacSystemFont,segoe ui,Roboto,Oxygen-Sans,Ubuntu,Cantarell,helvetica neue,Helvetica,游ゴシック,pingfang sc,STXihei,华文细黑,microsoft yahei,微软雅黑,SimSun,宋体,Heiti,黑体,sans-serif;font-weight:600;color:#000;margin:4rem 0 2.5rem}h1:hover .heading-link,h2:hover .heading-link,h3:hover .heading-link,h4:hover .heading-link,h5:hover .heading-link,h6:hover .heading-link{visibility:visible}h1 .heading-link,h2 .heading-link,h3 .heading-link,h4 .heading-link,h5 .heading-link,h6 .heading-link{color:#1565c0;font-weight:inherit;text-decoration:none;font-size:80%;visibility:hidden}h1 .title-link,h2 .title-link,h3 .title-link,h4 .title-link,h5 .title-link,h6 .title-link{color:inherit;font-weight:inherit;text-decoration:none}h1{font-size:3.2rem;line-height:3.6rem}@media only screen and (max-width:768px){h1{font-size:3rem;line-height:3.4rem}}h2{font-size:2.8rem;line-height:3.2rem}@media only screen and (max-width:768px){h2{font-size:2.6rem;line-height:3rem}}h3{font-size:2.4rem;line-height:2.8rem}@media only screen and (max-width:768px){h3{font-size:2.2rem;line-height:2.6rem}}h4{font-size:2.2rem;line-height:2.6rem}@media only screen and (max-width:768px){h4{font-size:2rem;line-height:2.4rem}}h5{font-size:2rem;line-height:2.4rem}@media only screen and (max-width:768px){h5{font-size:1.8rem;line-height:2.2rem}}h6{font-size:1.8rem;line-height:2.2rem}@media only screen and (max-width:768px){h6{font-size:1.6rem;line-height:2rem}}b,strong{font-weight:700}.highlight div,.highlight pre{margin:2rem 0;padding:1rem;border-radius:1rem}pre{display:block;font-family:SFMono-Regular,Consolas,Liberation Mono,Menlo,monospace;font-size:1.6rem;font-weight:400;line-height:2.6rem;overflow-x:auto;margin:2rem 0;padding:1rem;border-radius:1rem}pre code{display:inline-block}code{font-family:SFMono-Regular,Consolas,Liberation Mono,Menlo,monospace;font-size:1.6rem;font-weight:400;border-radius:.6rem;padding:.3rem .6rem}blockquote{border-left:2px solid 
#e0e0e0;padding-left:2rem;line-height:2.2rem;font-weight:400;font-style:italic}th,td{padding:1.6rem}table{border-collapse:collapse}table td,table th{border:2px solid #000}table tr:first-child th{border-top:0}table tr:last-child td{border-bottom:0}table tr td:first-child,table tr th:first-child{border-left:0}table tr td:last-child,table tr th:last-child{border-right:0}img{max-width:100%}figure{text-align:center}.footnotes ol li p{margin:0}.preload-transitions *{-webkit-transition:none!important;-moz-transition:none!important;-ms-transition:none!important;-o-transition:none!important;transition:none!important}.wrapper{display:flex;flex-direction:column;min-height:100vh;width:100%}.container{margin:1rem auto;max-width:90rem;width:100%;padding-left:2rem;padding-right:2rem}.fab{font-weight:400}.fas{font-weight:700}.float-right{float:right}.float-left{float:left}.fab{font-weight:400}.fas{font-weight:900}.content{flex:1;display:flex;margin-top:1.6rem;margin-bottom:3.2rem}.content header{margin-top:6.4rem;margin-bottom:3.2rem}.content header h1{font-size:4.2rem;line-height:4.6rem;margin:0}@media only screen and (max-width:768px){.content header h1{font-size:4rem;line-height:4.4rem}}.content article a:where(.external-link)::after{content:"⬈"}.content article details summary{cursor:pointer}.content article footer{margin-top:4rem}.content article footer .see-also{margin:3.2rem 0}.content article footer .see-also h3{margin:3.2rem 0}.content article p{text-align:justify;text-justify:auto;hyphens:auto}.content .post .post-title{margin-bottom:.75em}.content .post .post-meta i{text-align:center;width:1.6rem;margin-left:0;margin-right:.5rem}.content .post .post-meta .date .posted-on{margin-left:0;margin-right:1.5rem}.content .post .post-meta .tags .tag{display:inline-block;padding:.3rem .6rem;background-color:#e0e0e0;border-radius:.6rem;line-height:1.4em}.content .post .post-meta .tags .tag a{color:#212121}.content .post .post-meta .tags .tag a:active{color:#212121}.content figure{margin:0;padding:0}.content figcaption p{text-align:center;font-style:italic;font-size:1.6rem;margin:0}.avatar img{width:20rem;height:auto;border-radius:50%}@media only screen and (max-width:768px){.avatar img{width:10rem}}.list ul{margin:3.2rem 0;list-style:none;padding:0}.list ul li{font-size:1.8rem}@media only screen and (max-width:768px){.list ul li{margin:1.6rem 0}}.list ul li .date{display:inline-block;flex:1;width:20rem;text-align:right;margin-right:3rem}@media only screen and (max-width:768px){.list ul li .date{display:block;text-align:left}}.list ul li .title{font-size:1.8rem;flex:2;color:#212121;font-family:-apple-system,BlinkMacSystemFont,segoe ui,Roboto,Oxygen-Sans,Ubuntu,Cantarell,helvetica neue,Helvetica,游ゴシック,pingfang sc,STXihei,华文细黑,microsoft yahei,微软雅黑,SimSun,宋体,Heiti,黑体,sans-serif;font-weight:700}.list ul li .title:hover,.list ul li .title:focus{color:#1565c0}@media only screen and (min-width:768.1px){.list ul:not(.pagination) li{display:flex}}.centered{display:flex;align-items:center;justify-content:center}.centered .about{text-align:center}.centered .about h1{margin-top:2rem;margin-bottom:.5rem}.centered .about h2{margin-top:1rem;margin-bottom:.5rem;font-size:2.4rem}@media only screen and (max-width:768px){.centered .about h2{font-size:2rem}}.centered .about ul{list-style:none;margin:3rem 0 1rem;padding:0}.centered .about ul li{display:inline-block;position:relative}.centered .about ul li a{color:#212121;text-transform:uppercase;margin-left:1rem;margin-right:1rem;font-size:1.6rem}.centered .about ul li 
a:hover,.centered .about ul li a:focus{color:#1565c0}@media only screen and (max-width:768px){.centered .about ul li a{font-size:1.4rem}}.centered .error{text-align:center}.centered .error h1{margin-top:2rem;margin-bottom:.5rem;font-size:4.6rem}@media only screen and (max-width:768px){.centered .error h1{font-size:3.2rem}}.centered .error h2{margin-top:2rem;margin-bottom:3.2rem;font-size:3.2rem}@media only screen and (max-width:768px){.centered .error h2{font-size:2.8rem}}.notice{border-radius:.2rem;position:relative;margin:2rem 0;padding:0 .75rem;overflow:auto}.notice .notice-title{position:relative;font-weight:700;margin:0 -.75rem;padding:.2rem 3.5rem;border-bottom:1px solid #fafafa}.notice .notice-title i{position:absolute;top:50%;left:1.8rem;transform:translate(-50%,-50%)}.notice .notice-content{display:block;margin:2rem}.notice.note{background-color:#7e57c21a}.notice.note .notice-title{background-color:#673ab71a}.notice.note .notice-title i{color:#5e35b1}.notice.tip{background-color:#26a69a1a}.notice.tip .notice-title{background-color:#0096881a}.notice.tip .notice-title i{color:#00897b}.notice.example{background-color:#8d6e631a}.notice.example .notice-title{background-color:#7955481a}.notice.example .notice-title i{color:#6d4c41}.notice.question{background-color:#9ccc651a}.notice.question .notice-title{background-color:#8bc34a1a}.notice.question .notice-title i{color:#7cb342}.notice.info{background-color:#42a5f51a}.notice.info .notice-title{background-color:#2196f31a}.notice.info .notice-title i{color:#1e88e5}.notice.warning{background-color:#ffca281a}.notice.warning .notice-title{background-color:#ffc1071a}.notice.warning .notice-title i{color:#ffb300}.notice.error{background-color:#ef53501a}.notice.error .notice-title{background-color:#f443361a}.notice.error .notice-title i{color:#e53935}.navigation{height:6rem;width:100%}.navigation a,.navigation span{display:inline;font-size:1.7rem;font-family:-apple-system,BlinkMacSystemFont,segoe ui,Roboto,Oxygen-Sans,Ubuntu,Cantarell,helvetica neue,Helvetica,游ゴシック,pingfang sc,STXihei,华文细黑,microsoft yahei,微软雅黑,SimSun,宋体,Heiti,黑体,sans-serif;font-weight:600;color:#212121}.navigation a:hover,.navigation a:focus{color:#1565c0}.navigation .navigation-title{letter-spacing:.1rem;text-transform:uppercase}.navigation .navigation-list{float:right;list-style:none;margin-bottom:0;margin-top:0}@media only screen and (max-width:768px){.navigation .navigation-list{position:relative;top:2rem;right:0;z-index:5;visibility:hidden;opacity:0;padding:0;max-height:0;width:100%;background-color:#fafafa;border-top:solid 2px #e0e0e0;border-bottom:solid 2px #e0e0e0;transition:opacity .25s,max-height .15s linear}}.navigation .navigation-list .navigation-item{float:left;margin:0;position:relative}@media only screen and (max-width:768px){.navigation .navigation-list .navigation-item{float:none!important;text-align:center}.navigation .navigation-list .navigation-item a,.navigation .navigation-list .navigation-item span{line-height:5rem}}.navigation .navigation-list .navigation-item a,.navigation .navigation-list .navigation-item span{margin-left:1rem;margin-right:1rem}@media only screen and (max-width:768px){.navigation .navigation-list .separator{display:none}}@media only screen and (max-width:768px){.navigation .navigation-list .menu-separator{border-top:2px solid #212121;margin:0 8rem}.navigation .navigation-list .menu-separator span{display:none}}.navigation #dark-mode-toggle{margin:1.7rem 
0;font-size:2.4rem;line-height:inherit;bottom:2rem;left:2rem;z-index:100;position:fixed}.navigation #menu-toggle{display:none}@media only screen and (max-width:768px){.navigation #menu-toggle:checked+label>i{color:#e0e0e0}.navigation #menu-toggle:checked+label+ul{visibility:visible;opacity:1;max-height:100rem}}.navigation .menu-button{display:none}@media only screen and (max-width:768px){.navigation .menu-button{position:relative;display:block;font-size:2.4rem;font-weight:400}}.navigation .menu-button i:hover,.navigation .menu-button i:focus{color:#000}.navigation i{color:#212121;cursor:pointer}.navigation i:hover,.navigation i:focus{color:#1565c0}.pagination{margin-top:6rem;text-align:center;font-family:-apple-system,BlinkMacSystemFont,segoe ui,Roboto,Oxygen-Sans,Ubuntu,Cantarell,helvetica neue,Helvetica,游ゴシック,pingfang sc,STXihei,华文细黑,microsoft yahei,微软雅黑,SimSun,宋体,Heiti,黑体,sans-serif}.pagination li{display:inline;text-align:center;font-weight:700}.pagination li span{margin:0;text-align:center;width:3.2rem}.pagination li a{font-weight:300}.pagination li a span{margin:0;text-align:center;width:3.2rem}.tabs{display:flex;flex-wrap:wrap;margin:2rem 0;position:relative}.tabs.tabs-left{justify-content:flex-start}.tabs.tabs-left label.tab-label{margin-right:.5rem}.tabs.tabs-left .tab-content{border-radius:0 4px 4px 4px}.tabs.tabs-right{justify-content:flex-end}.tabs.tabs-right label.tab-label{margin-left:.5rem}.tabs.tabs-right .tab-content{border-radius:4px 0 4px 4px}.tabs input.tab-input{display:none}.tabs label.tab-label{background-color:#e0e0e0;border-color:#ccc;border-radius:4px 4px 0 0;border-style:solid;border-bottom-style:hidden;border-width:1px;cursor:pointer;display:inline-block;order:1;padding:.3rem .6rem;position:relative;top:1px;user-select:none}.tabs input.tab-input:checked+label.tab-label{background-color:#fafafa}.tabs .tab-content{background-color:#fafafa;border-color:#ccc;border-style:solid;border-width:1px;display:none;order:2;padding:1rem;width:100%}.tabs.tabs-code .tab-content{padding:.5rem}.tabs.tabs-code .tab-content pre{margin:0}.taxonomy li{display:inline-block;margin:.9rem}.taxonomy .taxonomy-element{display:block;padding:.3rem .9rem;background-color:#e0e0e0;border-radius:.6rem}.taxonomy .taxonomy-element a{color:#212121}.taxonomy .taxonomy-element a:active{color:#212121}.footer{width:100%;text-align:center;font-size:1.6rem;line-height:2rem;margin-bottom:1rem}.footer a{color:#1565c0}.float-container{bottom:2rem;right:2rem;z-index:100;position:fixed;font-size:1.6em}.float-container a{position:relative;display:inline-block;width:3rem;height:3rem;font-size:2rem;color:#000;background-color:#e0e0e0;border-radius:.2rem;opacity:.5;transition:all .25s ease-in}.float-container a:hover,.float-container a:focus{color:#1565c0;opacity:1}@media only screen and (max-width:768px){.float-container a:hover,.float-container a:focus{color:#000;opacity:.5}}.float-container a i{position:absolute;top:50%;left:50%;transform:translate(-50%,-50%)} \ No newline at end of file diff --git a/fonts/forkawesome-webfont.eot b/fonts/forkawesome-webfont.eot new file mode 100644 index 0000000..c2c24b4 Binary files /dev/null and b/fonts/forkawesome-webfont.eot differ diff --git a/fonts/forkawesome-webfont.svg b/fonts/forkawesome-webfont.svg new file mode 100644 index 0000000..bd45b30 --- /dev/null +++ b/fonts/forkawesome-webfont.svg @@ -0,0 +1,3232 @@ + + + + + +Created by FontForge 20190801 at Fri Aug 27 00:07:49 2021 + By shine +The Fork Awesome font is licensed under the SIL OFL 1.1 
(http://scripts.sil.org/OFL). Fork Awesome is a fork based of off Font Awesome 4.7.0 by Dave Gandy. More info on licenses at https://forkawesome.github.io + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/fonts/forkawesome-webfont.ttf b/fonts/forkawesome-webfont.ttf new file mode 100644 index 0000000..1f1d8f3 Binary files /dev/null and b/fonts/forkawesome-webfont.ttf differ diff --git a/fonts/forkawesome-webfont.woff b/fonts/forkawesome-webfont.woff new file mode 100644 index 0000000..cca43af Binary files /dev/null and b/fonts/forkawesome-webfont.woff differ diff --git a/fonts/forkawesome-webfont.woff2 b/fonts/forkawesome-webfont.woff2 new file mode 100644 index 0000000..c96e5bf Binary files /dev/null and b/fonts/forkawesome-webfont.woff2 differ diff --git a/images/avatar.png b/images/avatar.png new file mode 100644 index 0000000..6df5dfe Binary files /dev/null and b/images/avatar.png differ diff --git a/images/elb.png b/images/elb.png new file mode 100644 index 0000000..3a3337c Binary files /dev/null and b/images/elb.png differ diff --git a/images/favicon-16x16.png b/images/favicon-16x16.png new file mode 100644 index 0000000..c0ce306 Binary files /dev/null and b/images/favicon-16x16.png differ diff --git a/images/favicon-32x32.png b/images/favicon-32x32.png new file mode 100644 index 0000000..f2f0316 Binary files /dev/null and b/images/favicon-32x32.png differ diff --git a/images/fuh_project/demo.gif b/images/fuh_project/demo.gif new file mode 100644 index 0000000..15d9ad7 Binary files /dev/null and b/images/fuh_project/demo.gif differ diff --git a/images/infrastructure-flutter/aws_architektur_jenkins.png b/images/infrastructure-flutter/aws_architektur_jenkins.png new file mode 100644 index 0000000..e8cfb2f Binary files /dev/null and b/images/infrastructure-flutter/aws_architektur_jenkins.png differ diff --git a/images/lambda.png b/images/lambda.png new file mode 100644 index 0000000..bcef83a Binary files /dev/null and b/images/lambda.png differ 
diff --git a/images/memcached.png b/images/memcached.png new file mode 100644 index 0000000..90b5196 Binary files /dev/null and b/images/memcached.png differ diff --git a/images/nginx-proxy/cert.png b/images/nginx-proxy/cert.png new file mode 100644 index 0000000..a57df43 Binary files /dev/null and b/images/nginx-proxy/cert.png differ diff --git a/images/nginx-proxy/communication_wo_proxy.png b/images/nginx-proxy/communication_wo_proxy.png new file mode 100644 index 0000000..1fa24ac Binary files /dev/null and b/images/nginx-proxy/communication_wo_proxy.png differ diff --git a/images/submodule.png b/images/submodule.png new file mode 100644 index 0000000..92cdbe9 Binary files /dev/null and b/images/submodule.png differ diff --git a/images/three_tier.png b/images/three_tier.png new file mode 100644 index 0000000..d75bf56 Binary files /dev/null and b/images/three_tier.png differ diff --git a/index.html b/index.html new file mode 100644 index 0000000..2dc5318 --- /dev/null +++ b/index.html @@ -0,0 +1,4 @@ +Engineering Blog +
    avatar

    Pascal Grundmeier

    DevOps Engineer

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/index.xml b/index.xml new file mode 100644 index 0000000..c81113c --- /dev/null +++ b/index.xml @@ -0,0 +1,13 @@ +Engineering Bloghttps://pgrunm.github.io/Recent content on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it’s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have up to thousands of virtual servers, where no engineer can manually update these servers. So what to do, if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem.Kubernetes templating with Carvel ytthttps://pgrunm.github.io/posts/ytt/Sun, 25 Jun 2023 13:35:33 +0200https://pgrunm.github.io/posts/ytt/Introduction Link to heading Hello again, this is another blog post about a great CNCF tool. If you’ve ever worked with Kubernetes manifests, you probably know that editing or creating them by hand can be very painful. +On the other side, you as a developer or engineer don’t want to edit a lot in these manifests. It is usually better to edit the necessary parts and leave the rest as it was before.Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the the required Jenkins containers (master/agent) are set up. Let’s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It’s been a while, because I finally finished my studied and I’m now Bachelor of Science :-). Anyway, I wanted to create a blog post of my bachelor thesis and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastrcture was completely created with Terraform in AWS.Establishing proxy support for an application without proxy supporthttps://pgrunm.github.io/posts/nginx_forward_proxy/Tue, 07 Jul 2020 18:45:07 +0100https://pgrunm.github.io/posts/nginx_forward_proxy/Introduction Link to heading Hello again dear reader :-)! Some time passed since my last blog post, because I have been busy with University, but now since exams are done, I have some more time for creating the latest post. Recently I stumbled upon an application that needed internet access but unfortunately didn’t support a proxy server yet. 
At that point of the project we had to find a way to allow this application to communicate directly with the internet, but without having a direct connection to the internet.GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neigher read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn’t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive.Setting up the new Raspberry Pi 4 with Ansiblehttps://pgrunm.github.io/posts/raspi4_setup/Sat, 28 Mar 2020 18:45:07 +0100https://pgrunm.github.io/posts/raspi4_setup/Since June 2019 the new Raspberry Pi 4 is available to buy. It features much more memory (up to 4 GiB), a Gigabit Ethernet port and two USB 3 ports. So there is a lot of power to compute with, but before we can start playing with it, we have to set it up. +One more thing to say: I don’t want to manage my Pi by CLI but with Ansible.Using Prometheus for application metricshttps://pgrunm.github.io/posts/prometheus_instrumenting/Mon, 16 Mar 2020 19:33:10 +0100https://pgrunm.github.io/posts/prometheus_instrumenting/One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google for example uses their own software (called Borgmon) for monitoring. The open source software Prometheus allows to capture time-series data in order to monitor different statistics of an application like Borgmon does. Let’s see how we can do this on our own. +The basics of Prometheus Link to heading Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple html overview with a local webserver.Scaling expriments with different AWS serviceshttps://pgrunm.github.io/posts/aws_scaling_comparison/Wed, 26 Feb 2020 19:33:10 +0100https://pgrunm.github.io/posts/aws_scaling_comparison/As part of my studies I had to write an assigment in the module electronic business. I decided to develop some kind of dummy REST api application where I could try different architectures. The reason for me to try this out was to see how the performance changes over time if you increase the load. +I decided to use Go for this project, because it was designed for scalable cloud architectures and if you compile your code you just get a single binary file which you just have to upload to your machine and execute.Building my new blog: Part 2https://pgrunm.github.io/posts/building_blog_part2/Thu, 13 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part2/In the last post I wrote about my considerations about what software to use for my blog, where to host it and how to set it up. This post contains some more techinical details like the git structure and the deployment process. So then let’s dive in. +The git structure Link to heading The hugo projects mentions in their documentation to use a git submodule for the theme. 
Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo.Building my new blog: Part 1https://pgrunm.github.io/posts/building_blog_part1/Sat, 01 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part1/I wanted to create a blog for a long time already, but because of university I had not much spare time. Finally I found some time to create my blog and this post will contain some background information about the software I’m using, where it’s hosted etc. Enjoy my first post! +What to use Link to heading The first question I asked myself was: What software I’m going to use for my blog?Contacthttps://pgrunm.github.io/contact/Sat, 01 Feb 2020 14:31:31 +0100https://pgrunm.github.io/contact/Did you find an issue, want to contact me or just want to chat? Feel free to! You can find many ways to contact me on the front page.About mehttps://pgrunm.github.io/about/Sat, 01 Feb 2020 14:30:43 +0100https://pgrunm.github.io/about/Hello, my name is Pascal. I’m currently working as a DevOps Engineer at Dr. Klein. As a DevOps Engineer, I have deep knowledge in distributed systems, network and system engineering and security. I’m currently studying Applied Computer Science since October 2021 at the Fernuniversität Hagen, while working full time. +I’m reading a lot about of DevOps, Site Reliability Engineering, distributed systems and automation. I am very curious so I’m always experimenting and trying out different projects and enjoy to automate things. \ No newline at end of file diff --git a/js/coder.min.6ae284be93d2d19dad1f02b0039508d9aab3180a12a06dcc71b0b0ef7825a317.js b/js/coder.min.6ae284be93d2d19dad1f02b0039508d9aab3180a12a06dcc71b0b0ef7825a317.js new file mode 100644 index 0000000..bbecf34 --- /dev/null +++ b/js/coder.min.6ae284be93d2d19dad1f02b0039508d9aab3180a12a06dcc71b0b0ef7825a317.js @@ -0,0 +1 @@ +const body=document.body,darkModeToggle=document.getElementById("dark-mode-toggle"),darkModeMediaQuery=window.matchMedia("(prefers-color-scheme: dark)");localStorage.getItem("colorscheme")?setTheme(localStorage.getItem("colorscheme")):setTheme(body.classList.contains("colorscheme-light")||body.classList.contains("colorscheme-dark")?body.classList.contains("colorscheme-dark")?"dark":"light":darkModeMediaQuery.matches?"dark":"light"),darkModeToggle&&darkModeToggle.addEventListener("click",()=>{let e=body.classList.contains("colorscheme-dark")?"light":"dark";setTheme(e),rememberTheme(e)}),darkModeMediaQuery.addListener(e=>{setTheme(e.matches?"dark":"light")}),document.addEventListener("DOMContentLoaded",function(){let e=document.querySelector(".preload-transitions");e.classList.remove("preload-transitions")});function setTheme(e){body.classList.remove("colorscheme-auto");let n=e==="dark"?"light":"dark";body.classList.remove("colorscheme-"+n),body.classList.add("colorscheme-"+e),document.documentElement.style["color-scheme"]=e;function t(e){return new Promise(t=>{if(document.querySelector(e))return t(document.querySelector(e));const n=new MutationObserver(s=>{document.querySelector(e)&&(t(document.querySelector(e)),n.disconnect())});n.observe(document.body,{childList:!0,subtree:!0})})}if(e==="dark"){const e={type:"set-theme",theme:"github-dark"};t(".utterances-frame").then(t=>{t.contentWindow.postMessage(e,"https://utteranc.es")})}else{const e={type:"set-theme",theme:"github-light"};t(".utterances-frame").then(t=>{t.contentWindow.postMessage(e,"https://utteranc.es")})}function s(e){const 
t=document.querySelector("iframe.giscus-frame");if(!t)return;t.contentWindow.postMessage({giscus:e},"https://giscus.app")}s({setConfig:{theme:e}});const o=new Event("themeChanged");document.dispatchEvent(o)}function rememberTheme(e){localStorage.setItem("colorscheme",e)} \ No newline at end of file diff --git a/posts/aws_scaling_comparison/index.html b/posts/aws_scaling_comparison/index.html new file mode 100644 index 0000000..1159f42 --- /dev/null +++ b/posts/aws_scaling_comparison/index.html @@ -0,0 +1,1290 @@ +Scaling expriments with different AWS services · Engineering Blog +

    Scaling experiments with different AWS services

    As part of my studies I had to write an assignment in the module Electronic Business. I decided to develop a dummy REST API application that I could run on different architectures. The reason for trying this out was to see how the performance changes as the load increases.

    I decided to use Go for this project because it was designed for scalable cloud architectures, and compiling the code produces a single binary file that you only have to upload to your machine and execute.
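
    To illustrate the single-binary deployment: cross-compiling the service for a Linux host only needs Go’s standard build settings. A minimal sketch (the output name rest-api is just a placeholder):

    # Build a Linux binary from a Windows machine; GOOS and GOARCH are standard Go build settings.
    $env:GOOS = "linux"
    $env:GOARCH = "amd64"
    go build -o rest-api .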

    The load testing tool + +Link to heading

    As I’m also really familiar with Python, I really enjoy the tool Locust, which lets you stress test your services by simulating a configurable number of users that access your service over HTTP(S). The best thing about Locust is that it’s all code.

    from locust import HttpLocust, TaskSet, between
    +from random import randrange
    +
    +# Opening the website i.e. http://example.com/customerdata/123
    +# The actual hostname is specified and the Powershell script following.
    +def index(l):
    +    l.client.get(f'/customerdata/{randrange(1, 4500)}')
    +
    +class UserBehavior(TaskSet):
    +    tasks = {index: 1}
    +    # Things to do before doing anything else.
    +
    +
    +class WebsiteUser(HttpLocust):
    +    task_set = UserBehavior
    +    # Definition of the user behavior: Wait at least 5 seconds and maximum 9 seconds.
    +    wait_time = between(5.0, 9.0)
    +

    The Python script handles the logic of which URL to call and how long to wait between requests. The following PowerShell snippet runs Locust against the web service to create the load.

    param(
    +    # This parameter allows you to enter a hostname just by adding a -Hostname.
    +    $Hostname
    +)
    +
    +# This will be written to the file name.
    +$testCase = "three_tier"
    +
    +# How many users do we want to simulate?
    +$numberOfUsersToSimulate = 50, 100, 200, 400, 800, 1500
    +
    +# AWS Hostname
    +if ($null -eq $Hostname) {
    +    $Hostname = Read-Host -Prompt "Please enter the AWS Hostname"
    +}
    +
    +foreach ($users in $numberOfUsersToSimulate) {
    +    $testWithUsers = $testCase + "_" + $users
    +
    +    # PowerShell does not support "\" for line continuation, so the Locust arguments
    +    # are collected in an array and splatted onto the call below.
    +    $locustArgs = @(
    +        '-f', '.\Locust\Load_Test.py'
    +        '--no-web'                         # Don't run the web interface.
    +        '-c', $users                       # Simulate x users.
    +        '-r', 10                           # Number of created users per second.
    +        '--step-load'                      # Increase the load in steps.
    +        '--step-clients', ($users / 10)    # Increase the load by 10 percent every step.
    +        '--step-time', '15s'               # After 1.5 minutes the load reaches 100%.
    +        '-t', '3m'                         # The performance test runs 3 minutes overall.
    +        "--csv=Results/$testWithUsers"     # Save the measured data in a csv file.
    +        "--host=http://$($Hostname):10000" # This is the hostname on tcp port 10000.
    +        '--only-summary'                   # Print only a summary once finished.
    +    )
    +
    +    # Executing the Python file with Locust.
    +    locust @locustArgs
    +}
    +

    This is how I created the load on my service.
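
    For completeness, a hypothetical invocation of the script above (the script file name and the hostname are placeholders):

    # Runs the whole series of load tests against the given EC2 instance.
    .\Invoke-LoadTest.ps1 -Hostname "ec2-12-34-56-78.eu-central-1.compute.amazonaws.com"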

    Testing different architectures + +Link to heading

    The three tier architecture + +Link to heading

    In the beginning I started with the most basic and well-known three-tier architecture. This architecture consists of the client (a browser), a webserver and a database. As I created this project in Go, I just had to use Go’s built-in webserver to serve the requests.

    [Image: diagram of the three-tier architecture]

    As this is a cloud-only project, the webserver is located on an AWS EC2 t3.micro instance in Frankfurt. For the database I used an AWS RDS MariaDB instance, which I prefer over MySQL. In the following code snippet you can see the internals of this service: every time the service receives a request, it queries the database and returns the results.

    package main
    +
    +import (
    +	"database/sql"
    +	"encoding/json"
    +	"fmt"
    +	"log"
    +	"net/http"
    +	"strconv"
    +	"time"
    +
    +	_ "github.com/go-sql-driver/mysql"
    +	"github.com/gorilla/mux"
    +)
    +
    +var (
    +	db  *sql.DB
    +	err error
    +)
    +
    +// Customer - struct for customer data
    +type Customer struct {
    +	ID        int    `json:"Id"`
    +	Surname   string `json:"Surname"`
    +	Givenname string `json:"Givenname"`
    +}
    +
    +// Readings - struct for read data
    +type Readings struct {
    +	MeasureID    int    `json:"MeasureID"`
    +	MeasureDate  string `json:"MeasureDate"`
    +	MeasureValue int    `json:"MeasureValue"`
    +}
    +
    +type myReadings struct {
    +	Measures []Readings
    +}
    +
    +func (reading *myReadings) AddItem(item Readings) {
    +	reading.Measures = append(reading.Measures, item)
    +}
    +
    +func homePage(w http.ResponseWriter, r *http.Request) {
    +	fmt.Fprintf(w, "Welcome to the API!")
    +	fmt.Println("Endpoint Hit: Main API page")
    +}
    +
    +func returnCustomerData(w http.ResponseWriter, r *http.Request) {
    +	vars := mux.Vars(r)
    +	customerID, parseErr := strconv.ParseInt(vars["id"], 10, 32)
    +
    +	if parseErr != nil {
    +		println("dbError while parsing a customer id!")
    +	} else {
    +		// Prepare statement for reading data
    +		stmtOut, dbErr := db.Prepare("SELECT Measure_ID, Measure_Date, Value FROM Readings WHERE Customers_ID_FK = ?;")
    +		if dbErr != nil {
    +			fmt.Println("Error while creating the sql statement")
    +		}
    +		defer stmtOut.Close()
    +
    +		// Query the customer id store it in customerdata
    +
    +		rows, dbErr := stmtOut.Query(customerID)
    +		// Only close the result set if the query succeeded, otherwise rows is nil.
    +		if dbErr == nil {
    +			defer rows.Close()
    +		}
    +
    +		customReadingsList := myReadings{}
    +		var customerReadings Readings
    +
    +		if dbErr != nil {
    +			fmt.Println("unable to query user data", customerID, dbErr)
    +		} else {
    +			for rows.Next() {
    +
    +				err := rows.Scan(&customerReadings.MeasureID, &customerReadings.MeasureDate, &customerReadings.MeasureValue)
    +				if err != nil {
    +					log.Fatal(err)
    +				}
    +				customReadingsList.AddItem(customerReadings)
    +			}
    +			json.NewEncoder(w).Encode(customReadingsList)
    +		}
    +	}
    +
    +}
    +
    +func returnCustomer(w http.ResponseWriter, r *http.Request) {
    +	vars := mux.Vars(r)
    +	customerID, parseErr := strconv.ParseInt(vars["id"], 10, 32)
    +
    +	if parseErr != nil {
    +		println("dbError while parsing a customer id!")
    +	} else {
    +		// Prepare statement for reading data
    +		stmtOut, dbErr := db.Prepare("SELECT Customers_ID, Surname, Givenname FROM Customers WHERE Customers_ID = ?")
    +		if dbErr != nil {
    +			fmt.Println("Error while creating the sql statement")
    +		}
    +		defer stmtOut.Close()
    +
    +		var customerData Customer // we "scan" the result in here
    +
    +		// Query the customer id store it in customerdata
    +		dbErr = stmtOut.QueryRow(customerID).Scan(&customerData.ID, &customerData.Surname, &customerData.Givenname)
    +		if dbErr != nil {
    +			fmt.Println("unable to query user", customerID, dbErr)
    +		} else {
    +			fmt.Printf("The name of customer %d is: %s %s", customerData.ID, customerData.Givenname, customerData.Surname)
    +
    +			json.NewEncoder(w).Encode(customerData)
    +		}
    +	}
    +
    +}
    +
    +func handleRequests() {
    +	myRouter := mux.NewRouter().StrictSlash(true)
    +	myRouter.HandleFunc("/", homePage)
    +	myRouter.HandleFunc("/customer/{id}", returnCustomer)
    +	myRouter.HandleFunc("/customerdata/{id}", returnCustomerData)
    +
    +	// Needed to disable connection timeouts
    +	srv := &http.Server{
    +		Addr:         ":10000",
    +		ReadTimeout:  5 * time.Second,
    +		WriteTimeout: 10 * time.Second,
    +		Handler:      myRouter,
    +	}
    +
    +	srv.SetKeepAlivesEnabled(false)
    +
    +	log.Fatal(srv.ListenAndServe())
    +}
    +
    +func main() {
    +
    +	db, err = sql.Open("mysql", "admin:admin@tcp(123.4.5.6)/hausarbeit")
    +	if err != nil {
    +		panic(err.Error())
    +	}
    +	defer db.Close()
    +
    +	err = db.Ping()
    +	if err != nil {
    +		panic(err.Error())
    +	} else {
    +		fmt.Println("DB connection established!")
    +	}
    +	handleRequests()
    +}
    +

    This is a pretty basic setup which only took one or two hours to set up, which is a real advantage to me. On the other hand, this setup isn’t very scalable except in the vertical direction.

    When I stressed the service with Locust, the performance at the start was fine, but in the end the webserver wasn’t able to handle the load at all. The error rate went up, and requests were either not served or had to wait a long time for an answer.

    | Number of users | # requests | # failures | Median response time (ms) | Average response time (ms) | Requests/s | Requests failed/s |
    | --- | --- | --- | --- | --- | --- | --- |
    | 50 | 822 | 0 | 27 | 28 | 4.58 | 4.58 |
    | 100 | 1632 | 0 | 27 | 28 | 9.08 | 9.08 |
    | 200 | 3258 | 0 | 27 | 28 | 18.15 | 18.15 |
    | 400 | 6487 | 0 | 28 | 31 | 36.1 | 36.1 |
    | 800 | 10589 | 70 | 41 | 1084 | 58.98 | 0.39 |
    | 1500 | 13078 | 837 | 1100 | 4343 | 72.74 | 4.66 |

    The more the load increases, the more the performance degrades, which you can see in the rising response times. The number of failed requests increases as well. The next thing I tried in order to mitigate this was implementing a simple cache with AWS ElastiCache.

    Implementing a caching layer + +Link to heading

    AWS offers the ElastiCache service to increase the performance of applications, for example by speeding up database queries. Instead of calling the database directly, you first look inside the cache: if there is an entry for your request, it is answered directly from the cache. Otherwise you still need to call the database.

    Using a cache can be pretty effective, as it reduces the load on your database; you may even be able to scale down your RDS database instance, which reduces your overall running costs. Another positive effect is the performance increase you can get from it.

    [Image: the three-tier architecture extended with an ElastiCache (Memcached) caching layer]

    This picture shows almost the same setup as the previous one, except for the additional cache, for which I used AWS ElastiCache for Memcached. I chose Memcached as it’s pretty simple to set up. An alternative would be Redis, but in my opinion, for the simple purpose of caching strings and numbers, Memcached is sufficient.

    I had to modify the code a little bit, resulting in the following.

    package main
    +
    +import (
    +	"database/sql"
    +	"encoding/json"
    +	"fmt"
    +	"log"
    +	"net/http"
    +	"strconv"
    +	"time"
    +
    +	"github.com/bradfitz/gomemcache/memcache"
    +
    +	_ "github.com/go-sql-driver/mysql"
    +	"github.com/gorilla/mux"
    +)
    +
    +var (
    +	db  *sql.DB
    +	err error
    +
    +	// Memcached variable
    +	mc = *memcache.New("hausarbeit-eb-memcached.dldis0.cfg.euc1.cache.amazonaws.com:11211")
    +)
    +
    +// Customer - struct for customer data
    +type Customer struct {
    +	ID        int    `json:"Id"`
    +	Surname   string `json:"Surname"`
    +	Givenname string `json:"Givenname"`
    +}
    +
    +// Readings - struct for read data
    +type Readings struct {
    +	MeasureID    int    `json:"MeasureID"`
    +	MeasureDate  string `json:"MeasureDate"`
    +	MeasureValue int    `json:"MeasureValue"`
    +}
    +
    +type myReadings struct {
    +	Measures []Readings
    +}
    +
    +func (reading *myReadings) AddItem(item Readings) {
    +	reading.Measures = append(reading.Measures, item)
    +}
    +
    +func homePage(w http.ResponseWriter, r *http.Request) {
    +	fmt.Fprintf(w, "Welcome to the API!")
    +	fmt.Println("Endpoint Hit: Main API page")
    +}
    +
    +func returnCustomerData(w http.ResponseWriter, r *http.Request) {
    +	vars := mux.Vars(r)
    +	customerID, parseErr := strconv.ParseInt(vars["id"], 10, 32)
    +
    +	if parseErr != nil {
    +		println("dbError while parsing a customer id!")
    +	} else {
    +		// Try reading the data from the memcached server
    +		key := fmt.Sprintf("customerReadings_id_%d", customerID)
    +
    +		// mc.Get(&memcache.Item{Key: key, Value: []byte(b)})
    +		it, memErr := mc.Get(key)
    +		if memErr != nil {
    +			fmt.Printf("No data for customer id %d in memcached: %s", customerID, memErr)
    +
    +			// Prepare statement for reading data
    +			stmtOut, dbErr := db.Prepare("SELECT Measure_ID, Measure_Date, Value FROM Readings WHERE Customers_ID_FK = ?;")
    +			if dbErr != nil {
    +				fmt.Println("Error while creating the sql statement")
    +			}
    +			defer stmtOut.Close()
    +
    +			// Query the customer id store it in customerdata
    +
    +			rows, dbErr := stmtOut.Query(customerID)
    +			// Only close the result set if the query succeeded, otherwise rows is nil.
    +			if dbErr == nil {
    +				defer rows.Close()
    +			}
    +
    +			customReadingsList := myReadings{}
    +			var customerReadings Readings
    +
    +			if dbErr != nil {
    +				fmt.Println("unable to query user data", customerID, dbErr)
    +			} else {
    +				for rows.Next() {
    +
    +					err := rows.Scan(&customerReadings.MeasureID, &customerReadings.MeasureDate, &customerReadings.MeasureValue)
    +					if err != nil {
    +						log.Fatal(err)
    +					}
    +					customReadingsList.AddItem(customerReadings)
    +				}
    +				json.NewEncoder(w).Encode(customReadingsList)
    +			}
    +		} else {
    +			// Output the memcached data.
    +			w.Write(it.Value)
    +		}
    +
    +	}
    +
    +}
    +
    +func returnCustomer(w http.ResponseWriter, r *http.Request) {
    +	vars := mux.Vars(r)
    +	customerID, parseErr := strconv.ParseInt(vars["id"], 10, 32)
    +
    +	if parseErr != nil {
    +		println("dbError while parsing a customer id!")
    +	} else {
    +		// Prepare statement for reading data
    +		stmtOut, dbErr := db.Prepare("SELECT Customers_ID, Surname, Givenname FROM Customers WHERE Customers_ID = ?")
    +		if dbErr != nil {
    +			fmt.Println("Error while creating the sql statement")
    +		}
    +		defer stmtOut.Close()
    +
    +		var customerData Customer // we "scan" the result in here
    +
    +		// Query the customer id store it in customerdata
    +		dbErr = stmtOut.QueryRow(customerID).Scan(&customerData.ID, &customerData.Surname, &customerData.Givenname)
    +		if dbErr != nil {
    +			fmt.Println("unable to query user", customerID, dbErr)
    +		} else {
    +			fmt.Printf("The name of customer %d is: %s %s", customerData.ID, customerData.Givenname, customerData.Surname)
    +
    +			json.NewEncoder(w).Encode(customerData)
    +		}
    +	}
    +
    +}
    +
    +func handleRequests() {
    +	myRouter := mux.NewRouter().StrictSlash(true)
    +	myRouter.HandleFunc("/", homePage)
    +	myRouter.HandleFunc("/customer/{id}", returnCustomer)
    +	myRouter.HandleFunc("/customerdata/{id}", returnCustomerData)
    +
    +	// Needed to disable connection timeouts
    +	srv := &http.Server{
    +		Addr:         ":10000",
    +		ReadTimeout:  5 * time.Second,
    +		WriteTimeout: 10 * time.Second,
    +		Handler:      myRouter,
    +	}
    +
    +	srv.SetKeepAlivesEnabled(false)
    +
    +	log.Fatal(srv.ListenAndServe())
    +}
    +
    +func loadCustomerDataIntoMemory() {
    +	// Load the customer data into cached
    +	stmtOut, dbErr := db.Prepare("SELECT * FROM Customers;")
    +	if dbErr != nil {
    +		fmt.Println("Error while creating the sql statement")
    +	}
    +	defer stmtOut.Close()
    +
    +	// Query the customer id store it in customerdata
    +	rows, dbErr := stmtOut.Query()
    +	defer rows.Close()
    +
    +	var customerData Customer // we "scan" the result in here
    +
    +	if dbErr != nil {
    +		fmt.Println("unable to load user into memcached", dbErr)
    +	} else {
    +
    +		for rows.Next() {
    +			// ID, Surname, givenname
    +			err := rows.Scan(&customerData.ID, &customerData.Surname, &customerData.Givenname)
    +			if err != nil {
    +				fmt.Println("unable to parse user row into memcached", err)
    +			}
    +
    +			b, err := json.Marshal(customerData)
    +			if err != nil {
    +				fmt.Println(err)
    +				continue
    +			}
    +			// Format the key and
    +			key := fmt.Sprintf("customerData_id_%d", customerData.ID)
    +			//  Save the data to memcached servers
    +			mc.Set(&memcache.Item{Key: key, Value: []byte(b)})
    +		}
    +		fmt.Println("Finished cache creation for customer data.")
    +	}
    +
    +}
    +
    +func loadReadingsDataIntoMemory() {
    +	// Load the readings data into cache
    +
    +	// Does not work, we need this user by user
    +
    +	stmtOut, dbErr := db.Prepare("SELECT DISTINCT Customers_ID_FK FROM Readings;")
    +	if dbErr != nil {
    +		fmt.Println("Error while creating the sql statement")
    +	}
    +	defer stmtOut.Close()
    +
    +	// Query the customer id store it in customerdata
    +	rows, dbErr := stmtOut.Query()
    +	defer rows.Close()
    +
    +	if dbErr != nil {
    +		fmt.Println("unable to query user ids for caching memcached", dbErr)
    +	} else {
    +		// Load the user ids
    +		for rows.Next() {
    +			customReadingsList := myReadings{}
    +			var customerReadings Readings
    +			var customerID int
    +
    +			err := rows.Scan(&customerID)
    +			if err != nil {
    +				// log.Fatal(err)
    +				fmt.Println("Could not parse user id")
    +				continue
    +			}
    +
    +			stmtOut, dbErr := db.Prepare("SELECT Measure_ID, Measure_Date, Value FROM Readings where Customers_ID_FK = ?;")
    +			if dbErr != nil {
    +				fmt.Println("Error while creating the sql statement")
    +			}
    +			defer stmtOut.Close()
    +
    +			// Query the customer id store it in customerdata
    +			readingRows, dbErr := stmtOut.Query(customerID)
    +			// Close this inner result set once the query succeeded (instead of re-closing the outer rows).
    +			if dbErr == nil {
    +				defer readingRows.Close()
    +			}
    +
    +			if dbErr != nil {
    +				fmt.Println("unable to query user ids for caching memcached", dbErr)
    +			} else {
    +				for readingRows.Next() {
    +					// IDFK, MeasureID, Date, Value
    +					err := readingRows.Scan(&customerReadings.MeasureID, &customerReadings.MeasureDate, &customerReadings.MeasureValue)
    +					if err != nil {
    +						log.Fatal(err)
    +					}
    +					customReadingsList.AddItem(customerReadings)
    +				}
    +				// Save the loaded data to memcached by converting it to json
    +				b, err := json.Marshal(customReadingsList)
    +				if err != nil {
    +					fmt.Println(err)
    +					continue
    +				}
    +				// Format the key and
    +				key := fmt.Sprintf("customerReadings_id_%d", customerID)
    +				//  Save the data to memcached servers
    +				mc.Set(&memcache.Item{Key: key, Value: []byte(b)})
    +			}
    +		}
    +	}
    +	fmt.Println("Finished cache creation for customer readings.")
    +}
    +
    +func initSetup() {
    +
    +	db, err = sql.Open("mysql", "admin:admin@tcp(123.4.5.6)/hausarbeit")
    +	if err != nil {
    +		panic(err.Error())
    +	}
    +	defer db.Close()
    +
    +	err = db.Ping()
    +	if err != nil {
    +		panic(err.Error())
    +	} else {
    +		fmt.Println("DB connection established!")
    +	}
    +
    +	// Load all from the db into memcached
    +	loadCustomerDataIntoMemory()
    +	loadReadingsDataIntoMemory()
    +
    +	// Print when ready to serve
    +	fmt.Println("Ready to serve traffic...")
    +}
    +
    +func main() {
    +	initSetup()
    +	handleRequests()
    +}
    +

    The code now works as follows:

    1. At startup the cache is populated directly from the database.
    2. As soon as all entries from the database are available in the cache, the webserver starts and the application is ready to serve traffic.
    3. Every time you call a specific URL like https://example.org/customerdata/12345 the appropriate URL handler is called.
    4. If you request customer data, for example, the program tries to find an entry for the given key inside the cache and returns it to you. If it doesn’t find a value inside the cache, it queries the database.

    As the database load increases with the number of users, caching is a good strategy to decrease the response time.
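
    As a rough way to check the caching effect yourself, you could time a few identical requests, which should be answered from Memcached once the cache has been warmed up (the hostname and customer id are placeholders):

    # Print the response time in milliseconds for two consecutive, identical requests.
    $uri = "http://ec2-12-34-56-78.eu-central-1.compute.amazonaws.com:10000/customerdata/123"
    1..2 | ForEach-Object {
        (Measure-Command { Invoke-RestMethod -Uri $uri }).TotalMilliseconds
    }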

    | Number of users | # requests | # failures | Median response time (ms) | Average response time (ms) | Requests/s | Requests failed/s |
    | --- | --- | --- | --- | --- | --- | --- |
    | 50 | 824 | 0 | 23 | 23 | 4.59 | 0 |
    | 100 | 1637 | 1 | 24 | 23 | 9.11 | 0.01 |
    | 200 | 3266 | 2 | 25 | 25 | 18.18 | 0.01 |
    | 400 | 6477 | 3 | 26 | 31 | 36.05 | 0.02 |
    | 800 | 11112 | 42 | 31 | 678 | 61.83 | 0.23 |
    | 1500 | 13409 | 804 | 1200 | 4162 | 74.6 | 4.47 |

    As you can see in the table above, not only did the average response time decrease, but the number of failed requests per second decreased as well. So the cache makes things a little bit better, but in my opinion there is still room for improvement.

    Scaling out with a load balancer and multiple processes + +Link to heading

    As you saw in the last table, I tried my best to increase the load the example application could handle. As the three-tier architecture’s performance is limited, we need to scale out a little bit. To do this I added an Elastic Load Balancer to distribute the load across different processes. Actually I cheated a little bit, because I just started several instances of the same program, each listening on a different port.

    This image shows how the load is distributed across the processes using the round-robin algorithm.

    [Image: an Application Load Balancer distributing requests round robin to several processes on the same instance]

    The webservers are all listening on different ports (e.g. 10000-10005). For this kind of load balancing I used an Application Load Balancer, as AWS mentions that you can use it to distribute HTTP(S) connections to multiple ports on the same instance. This is a really great feature, because it lets us build the setup shown in the picture above.
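
    For illustration, registering the same instance several times with different ports in a target group could look roughly like this with the AWS CLI (the target group ARN and instance id are placeholders):

    # Each Id/Port pair becomes its own target, so one instance can receive traffic on several ports.
    aws elbv2 register-targets `
        --target-group-arn "arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/rest-api/abc123" `
        --targets Id=i-0123456789abcdef0,Port=10000 Id=i-0123456789abcdef0,Port=10001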

    This enables us to spread the load over five instances of the same service, all of which answer requests. Only the port has to be changed, and as I’m a lazy engineer I did this by adding a command-line parameter:

    func main() {
    +    // Port flag
    +	portPtr := flag.Int("port", 10000, "Port for the webserver to start")
    +	flag.Parse()
    +
    +	initSetup()
    +	handleRequests(*portPtr)
    +}
    +

    The default port is still TCP port 10000, but you can of course choose any other port. You can find the full source code here on GitHub. The rest is still the same.
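
    A possible way to start the five processes on their own ports, assuming the compiled binary is called rest-api.exe (on Linux you would start the binary directly instead):

    # Launch one copy of the service per port.
    10000..10004 | ForEach-Object {
        Start-Process -FilePath ".\rest-api.exe" -ArgumentList "-port", "$_"
    }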

    This leads us to the performance results I measured for this architecture:

    | Number of users | # requests | # failures | Median response time (ms) | Average response time (ms) | Requests/s | Requests failed/s |
    | --- | --- | --- | --- | --- | --- | --- |
    | 50 | 826 | 0 | 14 | 16 | 4.6 | 0 |
    | 100 | 1648 | 0 | 14 | 16 | 9.18 | 0 |
    | 200 | 3271 | 0 | 14 | 15 | 18.22 | 0 |
    | 400 | 6504 | 0 | 14 | 16 | 36.21 | 0 |
    | 800 | 12706 | 0 | 14 | 16 | 70.72 | 0 |
    | 1500 | 22829 | 0 | 14 | 19 | 126.85 | 0 |

    Not only did the number of answered requests increase, the number of failures and the average response time also went down. There were no failures anymore. As you can see, scaling out is a very effective strategy to increase the performance of your services.

    Scaling it to the maximum with serverless functions + +Link to heading

    Now I wanted to scale my little application out to the limit with serverless functions and AWS Lambda. Lambda enables you to run just the code you need by passing it to AWS. In my case I used Lambda with an API Gateway, where simply calling a URL triggers the Lambda function.
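
    Calling such a function is then just an HTTP request against the API Gateway invoke URL; a hypothetical example (the URL, stage and resource path are placeholders):

    # The id query string parameter is read by the Lambda handler shown below.
    Invoke-RestMethod -Uri "https://abc123.execute-api.eu-central-1.amazonaws.com/prod/customerdata?id=123"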

    The architecture changed as shown in the following diagram:

    [Image: serverless architecture with API Gateway and AWS Lambda]

The length of the source code decreased heavily and now looks like this:

    package main
    +
    +import (
    +	"database/sql"
    +	"encoding/json"
    +	"fmt"
    +	"log"
    +	"net/http"
    +	"regexp"
    +
    +	"github.com/aws/aws-lambda-go/events"
    +	"github.com/aws/aws-lambda-go/lambda"
    +	"github.com/bradfitz/gomemcache/memcache"
    +	_ "github.com/go-sql-driver/mysql"
    +)
    +
    +var (
    +	db  *sql.DB
    +	err error
    +
    +	// Memcached variable
    +	mc = *memcache.New("hausarbeit-eb-memcached.dldis0.cfg.euc1.cache.amazonaws.com:11211")
    +)
    +
    +// Customer - struct for customer data
    +type Customer struct {
    +	ID        int    `json:"Id"`
    +	Surname   string `json:"Surname"`
    +	Givenname string `json:"Givenname"`
    +}
    +
    +// Readings - struct for read data
    +type Readings struct {
    +	MeasureID    int    `json:"MeasureID"`
    +	MeasureDate  string `json:"MeasureDate"`
    +	MeasureValue int    `json:"MeasureValue"`
    +}
    +
    +type myReadings struct {
    +	Measures []Readings
    +}
    +
    +func (reading *myReadings) AddItem(item Readings) {
    +	reading.Measures = append(reading.Measures, item)
    +}
    +
    +func clientError(status int) (events.APIGatewayProxyResponse, error) {
    +	return events.APIGatewayProxyResponse{
    +		StatusCode: status,
    +		Body:       http.StatusText(status),
    +	}, nil
    +}
    +
    +func returnCustomerData(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    +	// ID may only contain numbers
+	var idRegExp = regexp.MustCompile(`^[0-9]+$`)
    +
    +	// Parse the id from the query string
    +	ID := req.QueryStringParameters["id"]
    +
    +	// Check if the provided ID is valid
    +	if !idRegExp.MatchString(ID) {
    +		return clientError(http.StatusBadRequest)
    +	}
    +	key := fmt.Sprintf("customerReadings_id_%s", ID)
    +
    +	it, memErr := mc.Get(key)
    +	if memErr != nil {
    +		fmt.Printf("No data for customer id %s in memcached: %s", ID, memErr)
    +
    +		// Create a db connection
    +		initSetup()
    +
    +		// Prepare statement for reading data
    +		stmtOut, dbErr := db.Prepare("SELECT Measure_ID, Measure_Date, Value FROM Readings WHERE Customers_ID_FK = ?;")
+		if dbErr != nil {
+			fmt.Println("Error while creating the sql statement")
+			return clientError(http.StatusInternalServerError)
+		}
+		defer stmtOut.Close()
    +
    +		// Query the customer id store it in customerdata
+		rows, dbErr := stmtOut.Query(ID)
+		if rows != nil {
+			// Close the result set once the handler returns.
+			defer rows.Close()
+		}
    +
    +		customReadingsList := myReadings{}
    +		var customerReadings Readings
    +
+		if dbErr != nil {
+			fmt.Println("unable to query user data", ID, dbErr)
+			// Without data from the database there is nothing to return.
+			return clientError(http.StatusInternalServerError)
+		} else {
    +			for rows.Next() {
    +
    +				err := rows.Scan(&customerReadings.MeasureID, &customerReadings.MeasureDate, &customerReadings.MeasureValue)
    +				if err != nil {
    +					log.Fatal(err)
    +				}
    +				customReadingsList.AddItem(customerReadings)
    +			}
+			payload, marshalErr := json.Marshal(customReadingsList)
+			if marshalErr != nil {
+				fmt.Println("unable to marshal the readings", marshalErr)
+				return clientError(http.StatusInternalServerError)
+			}
+			// Return the readings fetched from the database.
+			return events.APIGatewayProxyResponse{
+				StatusCode: http.StatusOK,
+				Body:       string(payload),
+			}, nil
    +		}
    +	}
    +
+	// Return the cached value and an HTTP 200 code.
    +	return events.APIGatewayProxyResponse{
    +		StatusCode: http.StatusOK,
    +		Body:       string(it.Value),
    +	}, nil
    +
    +}
    +
    +func initSetup() {
    +
    +	db, err = sql.Open("mysql", "admin:admin@tcp(123.4.5.6)/example")
    +	if err != nil {
    +		panic(err.Error())
    +	}
+	// Keep the connection open; it is used for the queries after this setup.
    +
    +	err = db.Ping()
    +	if err != nil {
    +		panic(err.Error())
    +	} else {
    +		fmt.Println("DB connection established!")
    +	}
    +
    +}
    +
    +func main() {
    +	// Start the Lambda Handler
    +	lambda.Start(returnCustomerData)
    +}
    +

As you can see, I'm now parsing the ID, which is submitted as a query parameter, and validating it with a regular expression. If it's valid, the program tries to get the data from the cache. If there is no data inside the cache, it queries the database.
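To make the request flow concrete, calling the function through the API Gateway could look like the following curl command. The host name, stage and resource path are placeholders; only the id query string parameter comes from the code above:

# Invoke the Lambda function through its (hypothetical) API Gateway endpoint.
curl "https://<api-id>.execute-api.eu-central-1.amazonaws.com/prod/readings?id=12345"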

Now finally the results of the Lambda performance tests:

Number of users | # requests | # failures | Median response time (ms) | Average response time (ms) | Requests/s | Requests Failed/s
50 | 818 | 0 | 27 | 31 | 4,56 | 0
100 | 1652 | 0 | 27 | 31 | 9,2 | 0
200 | 3273 | 0 | 27 | 31 | 18,2 | 0
400 | 6504 | 0 | 26 | 31 | 36,14 | 0
800 | 12701 | 0 | 27 | 33 | 70,41 | 0
1500 | 22898 | 0 | 27 | 36 | 126,19 | 0

As in the previous table, there are no failures at all. The number of responses is also almost the same. The average response time increased a little bit compared with the previous architecture.

    Summary + +Link to heading

In the end I can say it's best to use an Elastic Load Balancer to be able to distribute the load across different nodes, or at least across different ports on the same node. Using an ELB is a good idea because you don't have to change the endpoint your users are calling. If there were no ELB, you would probably have to move the DNS name of your endpoint (like www.example.org) to an ELB later on anyway.

The ELB and the AWS Lambda architecture are almost on par. But when it comes to scalability, we have to take into account that the ELB architecture is still running on only one EC2 instance. This means the machine's capacity isn't endless, and at some point you can't scale this architecture any further.

The Lambda architecture, in contrast, scales automatically as the load increases. On the other hand, the Lambda architecture is harder to set up and to debug. But you get an almost infinitely scalable piece of architecture out of it. It's up to you to decide what fits better.

    I hope you enjoyed my article, feel free to contact me if you have any feedback or suggestions.

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/building_blog_part1/index.html b/posts/building_blog_part1/index.html new file mode 100644 index 0000000..2511cc3 --- /dev/null +++ b/posts/building_blog_part1/index.html @@ -0,0 +1,19 @@ +Building my new blog: Part 1 · Engineering Blog +

    Building my new blog: Part 1

I had wanted to create a blog for a long time, but because of university I didn't have much spare time. Finally I found some time to create my blog, and this post contains some background information about the software I'm using, where it's hosted and so on. Enjoy my first post!

    What to use + +Link to heading

The first question I asked myself was: What software am I going to use for my blog? Maybe WordPress or Joomla, for example? Lately you can read a lot about critical vulnerabilities in WordPress and rarely maintained plugins. So this wasn't an option for me, because security is a must-have.

Recently I played some Terraria with my girlfriend on the same computer by using an open source software called Universal Split Screen. I really liked the clear design of the page, which is apparently hosted on GitHub Pages. I read a little bit about the background and found out that you can easily create and host static sites with this technology. Unfortunately there was one problem: it is based on Jekyll, which uses Ruby as its programming language. As I'm already familiar with Python, Go and some other languages, I wasn't interested in learning Ruby just for this project.

Fortunately I found Hugo, which uses Google's Go as its programming language and enables you to create pages from Markdown. This is really great, because Markdown is super simple, and Hugo became my favorite for the project.

    Where to host + +Link to heading

The next question I asked myself: Where do I want to host my new blog? I wanted to have it quick and simple (you can call it KISS). As mentioned earlier, if you have a GitHub account you can create a personal page to host static sites. I had never used this feature before but always wanted to. GitHub explains this really well, and the Hugo project also explains how to use GitHub Pages.

    This question was easy to answer.

    How to set up the website + +Link to heading

I mainly used the Hugo documentation, like the quick start guide and the usage guide.
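For reference, a minimal setup along the lines of the quick start guide could look roughly like the following commands. The site name is arbitrary, and the repository URL for the hugo-coder theme used on this blog is from memory, so double-check it against the theme's documentation:

# Create a new Hugo site and turn it into a git repository.
hugo new site blog
cd blog
git init

# Add a theme as a git submodule and enable it in the config file.
git submodule add https://github.com/luizdepra/hugo-coder.git themes/hugo-coder
echo 'theme = "hugo-coder"' >> config.toml

# Create a first post and preview the site (including drafts) locally.
hugo new posts/my-first-post.md
hugo server -D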

    Another important page is the configuration overview, where Hugo tells you what you can enable or disable with the config file. The hardest part was of course designing and creating the content like the home page or this first post :-).

    I hope you enjoyed the first part, in the next part I’m going to tell you a bit more how I’m publishing new posts and the git structure I’m using.

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/building_blog_part2/index.html b/posts/building_blog_part2/index.html new file mode 100644 index 0000000..7a53983 --- /dev/null +++ b/posts/building_blog_part2/index.html @@ -0,0 +1,81 @@ +Building my new blog: Part 2 · Engineering Blog +

    Building my new blog: Part 2

In the last post I wrote about my considerations about which software to use for my blog, where to host it and how to set it up. This post contains some more technical details, like the git structure and the deployment process. So let's dive in.

    The git structure + +Link to heading

The Hugo project mentions in its documentation that you should use a git submodule for the theme. Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo.

    This is an image

    Image 1: An illustration of a git repository with an embedded submodule (source: Own image)

By doing this I can fetch the latest changes made to the Hugo theme I use, to avoid security issues or to get the latest features, just by running a simple git pull. The best thing is that you can use submodules in both directions: push and pull.
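In practice this boils down to two git commands; the theme path themes/hugo-coder is just an example taken from the theme used on this blog:

# Clone the blog including the theme submodule in one go.
git clone --recurse-submodules <your blog repository>

# Later on, pull the latest commits of the theme submodule.
git submodule update --remote --merge themes/hugo-coder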

    Automating the deployment process + +Link to heading

As good administrators are usually lazy, which means they automate recurring tasks, you can create another submodule to easily publish the latest commits directly to GitHub Pages. The Hugo project also tells you how to do this automatic deployment to GitHub. You just have to add another submodule pointing to your GitHub Pages repository. As soon as you've finished this, it's pretty straightforward.
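Sketched out, and with <username> as a placeholder for your own GitHub account, adding the deployment submodule looks like this:

# Mount the GitHub Pages repository as the public/ folder,
# so everything Hugo generates can be pushed there directly.
git submodule add -b master https://github.com/<username>/<username>.github.io.git public

The deployment script itself, which builds the site, commits the generated files and pushes them, then looks like this: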

    #!/bin/sh
    +
    +# If a command fails then the deploy stops
    +set -e
    +
    +printf "\033[0;32mDeploying updates to GitHub...\033[0m\n"
    +
    +# Build the project.
+# If you use a theme, wrap its name in quotation marks to avoid nasty errors when creating your pages.
    +./hugo -t "hugo-coder"
    +
    +# Go To Public folder
    +cd public
    +
    +# Add changes to git.
    +git add .
    +
    +# Read a commit message in
    +read -p "Please enter a commit message: " msg
    +
    +# Commit changes.
    +if [ -n "$*" ]; then
    +    msg="$*"
    +fi
    +git commit -m "$msg"
    +
    +# Finally: push source and build repos.
    +git push origin master
    +
    +# From here on your changes to github pages are live so that you can view them.
    +

So as soon as I run this shell script, my latest changes are directly live on GitHub and therefore also on the website. Pretty simple, isn't it? This enables me to continuously update my blog just by running a simple shell script.
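Assuming the script above is saved as deploy.sh, a typical run looks like this:

# Deploy the latest changes and pass the commit message as an argument.
./deploy.sh "Publish new post about AWS scaling"

Note that the script still prompts for a commit message; whatever you pass on the command line takes precedence over the prompted value.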

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/index.html b/posts/index.html new file mode 100644 index 0000000..a35d37c --- /dev/null +++ b/posts/index.html @@ -0,0 +1,15 @@ +Posts · Engineering Blog +

    Posts

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/index.xml b/posts/index.xml new file mode 100644 index 0000000..79970d8 --- /dev/null +++ b/posts/index.xml @@ -0,0 +1,12 @@ +Posts on Engineering Bloghttps://pgrunm.github.io/posts/Recent content in Posts on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it’s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have up to thousands of virtual servers, where no engineer can manually update these servers. So what to do, if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem.Kubernetes templating with Carvel ytthttps://pgrunm.github.io/posts/ytt/Sun, 25 Jun 2023 13:35:33 +0200https://pgrunm.github.io/posts/ytt/Introduction Link to heading Hello again, this is another blog post about a great CNCF tool. If you’ve ever worked with Kubernetes manifests, you probably know that editing or creating them by hand can be very painful. +On the other side, you as a developer or engineer don’t want to edit a lot in these manifests. It is usually better to edit the necessary parts and leave the rest as it was before.Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the the required Jenkins containers (master/agent) are set up. Let’s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It’s been a while, because I finally finished my studied and I’m now Bachelor of Science :-). Anyway, I wanted to create a blog post of my bachelor thesis and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastrcture was completely created with Terraform in AWS.Establishing proxy support for an application without proxy supporthttps://pgrunm.github.io/posts/nginx_forward_proxy/Tue, 07 Jul 2020 18:45:07 +0100https://pgrunm.github.io/posts/nginx_forward_proxy/Introduction Link to heading Hello again dear reader :-)! Some time passed since my last blog post, because I have been busy with University, but now since exams are done, I have some more time for creating the latest post. Recently I stumbled upon an application that needed internet access but unfortunately didn’t support a proxy server yet. 
At that point of the project we had to find a way to allow this application to communicate directly with the internet, but without having a direct connection to the internet.GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neigher read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn’t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive.Setting up the new Raspberry Pi 4 with Ansiblehttps://pgrunm.github.io/posts/raspi4_setup/Sat, 28 Mar 2020 18:45:07 +0100https://pgrunm.github.io/posts/raspi4_setup/Since June 2019 the new Raspberry Pi 4 is available to buy. It features much more memory (up to 4 GiB), a Gigabit Ethernet port and two USB 3 ports. So there is a lot of power to compute with, but before we can start playing with it, we have to set it up. +One more thing to say: I don’t want to manage my Pi by CLI but with Ansible.Using Prometheus for application metricshttps://pgrunm.github.io/posts/prometheus_instrumenting/Mon, 16 Mar 2020 19:33:10 +0100https://pgrunm.github.io/posts/prometheus_instrumenting/One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google for example uses their own software (called Borgmon) for monitoring. The open source software Prometheus allows to capture time-series data in order to monitor different statistics of an application like Borgmon does. Let’s see how we can do this on our own. +The basics of Prometheus Link to heading Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple html overview with a local webserver.Scaling expriments with different AWS serviceshttps://pgrunm.github.io/posts/aws_scaling_comparison/Wed, 26 Feb 2020 19:33:10 +0100https://pgrunm.github.io/posts/aws_scaling_comparison/As part of my studies I had to write an assigment in the module electronic business. I decided to develop some kind of dummy REST api application where I could try different architectures. The reason for me to try this out was to see how the performance changes over time if you increase the load. +I decided to use Go for this project, because it was designed for scalable cloud architectures and if you compile your code you just get a single binary file which you just have to upload to your machine and execute.Building my new blog: Part 2https://pgrunm.github.io/posts/building_blog_part2/Thu, 13 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part2/In the last post I wrote about my considerations about what software to use for my blog, where to host it and how to set it up. This post contains some more techinical details like the git structure and the deployment process. So then let’s dive in. +The git structure Link to heading The hugo projects mentions in their documentation to use a git submodule for the theme. 
Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo.Building my new blog: Part 1https://pgrunm.github.io/posts/building_blog_part1/Sat, 01 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part1/I wanted to create a blog for a long time already, but because of university I had not much spare time. Finally I found some time to create my blog and this post will contain some background information about the software I’m using, where it’s hosted etc. Enjoy my first post! +What to use Link to heading The first question I asked myself was: What software I’m going to use for my blog? \ No newline at end of file diff --git a/posts/infrastructure_flutter_part1/index.html b/posts/infrastructure_flutter_part1/index.html new file mode 100644 index 0000000..2dfbd9c --- /dev/null +++ b/posts/infrastructure_flutter_part1/index.html @@ -0,0 +1,1532 @@ +Developing Flutter apps with cloud infrastructure: Part 1 · Engineering Blog +

    Developing Flutter apps with cloud infrastructure: Part 1

    Introduction + +Link to heading

Hello again! It's been a while, because I finally finished my studies and I'm now a Bachelor of Science :-). Anyway, I wanted to create a blog post about my bachelor thesis, and this is going to be the one.

The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastructure was completely created with Terraform in AWS. The architecture itself is based on a sample published by AWS. It consists of:

• an internet gateway (for access from the public internet, of course)
• an application load balancer (which I replaced with an Elastic Load Balancer)
• NAT gateways for network address translation between private and public IP addresses and
• the Jenkins leader as well as the agents of course

This is of course just the current state of the architecture; it can be changed at any time, because everything is completely created using Terraform.

    This is an image

This image shows a simplified view of the architecture itself. It is spread over two AZs (eu-central-1a and eu-central-1b) in this example. The AZs can be configured within the Terraform config. It's even possible to distribute the infrastructure across more than two AZs.

    Provisioning of the cloud infrastructure + +Link to heading

Let's start with the provisioning of the AWS infrastructure. As I appreciate automation a lot, I did not build any infrastructure by hand. Instead I used Terraform to create every necessary piece of infrastructure, from the VPC to the Jenkins leader. The most interesting parts for now are the main file and the variables.

    Terraform main file + +Link to heading

The content below is inside the main.tf file. It basically creates the ECS cluster, the CloudWatch log group and the ECS service itself; the security groups and subnets it references are defined in other files.

    provider "aws" {
    +  region = "eu-central-1"
    +}
    +
    +# We need a cluster in which to put our service.
    +resource "aws_ecs_cluster" "JenkinsThesisAwsDev" {
    +  name = var.application_name
    +}
    +
    +# Log groups hold logs from our app.
    +resource "aws_cloudwatch_log_group" "JenkinsThesisAwsDev" {
    +  name = "/ecs/${var.application_name}"
    +  # Delete Logs after 7 days
    +  retention_in_days = 7
    +
    +  # Write the environment into tags
    +  tags = {
    +    "Environment" = var.environment
    +  }
    +}
    +
    +# The main service.
    +resource "aws_ecs_service" "JenkinsThesisAwsDev" {
    +  name            = "ecs_service_${var.application_name}"
    +  task_definition = aws_ecs_task_definition.jenkins_master.arn
    +  cluster         = aws_ecs_cluster.JenkinsThesisAwsDev.id
    +  launch_type     = "FARGATE"
    +
    +  # Require service version 1.4.0!
    +  platform_version = "1.4.0"
    +
    +  desired_count = 1
    +
    +  # Register the master and the port in dns
    +  service_registries {
    +    registry_arn = aws_service_discovery_service.jenkins_master.arn
    +    port         = 50000
    +  }
    +
    +  load_balancer {
    +    target_group_arn = aws_lb_target_group.jenkins.arn
    +    container_name   = "jenkins_master"
    +    container_port   = 8080
    +  }
    +
    +  network_configuration {
    +    assign_public_ip = false
    +
    +    security_groups = [
    +      aws_security_group.outbound.id,
    +      aws_security_group.efs_jenkins_security_group.id,
    +      aws_security_group.jenkins.id
    +    ]
    +
    +    subnets = [
    +      aws_subnet.private[0].id,
    +      aws_subnet.private[1].id,
    +    ]
    +  }
    +}
    +

    The region is also configured within this file. As you can see, we run this on Fargate, because this is easier. The other option would be to use EC2 machines, but this is not necessary.

    Some other important points are

    • the configuration of the Fargate version 1.4.0, because otherwise you cannot mount storage into the containers
    • the DNS registration of the master and the agent
    • the configuration of the Elastic loadbalancer to redirect any incoming traffic to the Jenkins master container on TCP port 8080

    Configuration of variables + +Link to heading

    The tags and variables are populated inside the variables.tf file, as listed here:

    # Type of environment e. g. dev or prod
    +variable "environment" {
    +  description = "The name to use for the environment, used in Names etc."
    +  type        = string
    +  default     = "dev"
    +}
    +
    +# Name of the Application
    +variable "application_name" {
    +  description = "The name of the application"
    +  type        = string
    +  default     = "jenkins_flutter_thesis"
    +}
    +
    +# Output of the current region
    +data "aws_region" "current" {}
    +
    +# Name of the Admin Account
    +variable "jenkins_accountname" {
    +  description = "The Jenkins Master Account"
    +  type        = string
    +  default     = "developer"
    +}
    +
    +# Create a random password for the first login of the administrator account.
    +resource "random_string" "jenkins_pass" {
    +  length           = 20
    +  special          = true
    +  override_special = "/@\" "
    +}
    +
    +variable "master_memory_amount" {
    +  default     = 1024
+  description = "Soft RAM Limit for Jenkins Master"
    +}
    +
    +variable "master_cpu_amount" {
    +  default     = 512
+  description = "Soft CPU Limit for Jenkins Master"
    +}
    +
    +variable "agent_memory_amount" {
    +  default     = 4096
    +  description = "Soft RAM Limit for Jenkins Agent"
    +}
    +
    +variable "agent_cpu_amount" {
    +  default     = 2048
    +  description = "Soft CPU Limit for Jenkins Agent"
    +}
    +
    +variable "s3_artifact_bucket_name" {
    +  default     = "jenkins-flutter-artifact-bucket"
    +  description = "Default Name for S3 Bucket where Jenkins stores artifacts"
    +}
    +
    +variable "s3_folder_prefix_name" {
    +  default     = "jenkins_artifacts"
    +  description = "Default folder prefix for folder in the S3 Bucket where Jenkins stores artifacts"
    +}
    +

    Just to mention the most important points:

• Any tags as well as the application name come from this file and can be configured accordingly
    • the administrator’s username and password are configured inside this file. This is only for demo purposes, never do this in a production environment!
    • CPU and memory limits are configured here
    • Name and folder prefix for the S3 bucket, where artifacts are stored

There are of course even more Terraform files, like the ones that create IAM policies, ECS task definitions for the Jenkins master and agent, or an S3 bucket for storing artifacts.

    The most interesting file is probably the network.tf file, as it contains the most details about the network structure:

    # network.tf
    +resource "aws_vpc" "app-vpc" {
    +  cidr_block           = "10.0.0.0/16"
    +  enable_dns_hostnames = true
    +  enable_dns_support   = true
    +}
    +
    +resource "aws_subnet" "public" {
    +  vpc_id            = aws_vpc.app-vpc.id
    +  count             = length(var.public_subnets)
    +  cidr_block        = var.public_subnets[count.index]
    +  availability_zone = var.azs[count.index]
    +}
    +
    +# Internet GW
    +resource "aws_internet_gateway" "gw" {
    +  vpc_id = aws_vpc.app-vpc.id
    +
    +  tags = {
    +    Name        = var.application_name,
    +    Environment = var.environment
    +  }
    +}
    +
    +resource "aws_route_table" "route" {
    +  vpc_id = aws_vpc.app-vpc.id
    +  route {
    +    cidr_block = "0.0.0.0/0"
    +    gateway_id = aws_internet_gateway.gw.id
    +  }
    +  tags = {
    +    Name = "Gatewayroute for ${var.application_name}: ${var.environment} environment"
    +  }
    +}
    +
    +resource "aws_route_table_association" "public" {
    +  subnet_id      = aws_subnet.public[count.index].id
    +  route_table_id = aws_route_table.route.id
    +  count          = length(var.public_subnets)
    +}
    +
    +# Private Subnet
    +resource "aws_subnet" "private" {
    +  vpc_id            = aws_vpc.app-vpc.id
    +  count             = length(var.private_subnets)
    +  cidr_block        = var.private_subnets[count.index]
    +  availability_zone = var.azs[count.index]
    +}
    +
    +# NAT Stuff
    +
    +# Elastic IP for NAT
    +resource "aws_eip" "nat" {
    +  vpc   = true
    +  count = 2
    +}
    +
    +resource "aws_nat_gateway" "ngw" {
    +  subnet_id     = aws_subnet.public[count.index].id
    +  allocation_id = aws_eip.nat[count.index].id
    +  count         = length(var.public_subnets)
    +  depends_on    = [aws_internet_gateway.gw]
    +}
    +
    +# Routing
    +
    +resource "aws_route_table" "public" {
    +  vpc_id = aws_vpc.app-vpc.id
    +  count  = length(var.public_subnets)
    +  tags = {
    +    "Name" = "Route Table for Public Subnet ${count.index}"
    +  }
    +}
    +
    +resource "aws_route_table" "private" {
    +  vpc_id = aws_vpc.app-vpc.id
    +  count  = length(var.private_subnets)
    +  tags = {
    +    "Name" = "Route Table for Private Subnet ${count.index}"
    +  }
    +}
    +
    +# Routing Table Association
    +resource "aws_route_table_association" "public_subnet" {
    +  subnet_id      = aws_subnet.public[count.index].id
    +  count          = length(var.public_subnets)
    +  route_table_id = aws_route_table.public[count.index].id
    +}
    +
    +resource "aws_route_table_association" "private_subnet" {
    +  subnet_id      = aws_subnet.private[count.index].id
    +  count          = length(var.private_subnets)
    +  route_table_id = aws_route_table.private[count.index].id
    +}
    +
    +# Creating the Network Routes
    +resource "aws_route" "public_igw" {
    +  count                  = length(var.public_subnets)
    +  gateway_id             = aws_internet_gateway.gw.id
    +  route_table_id         = aws_route_table.public[count.index].id
    +  destination_cidr_block = "0.0.0.0/0"
    +}
    +
    +resource "aws_route" "private_ngw" {
    +  count                  = length(var.private_subnets)
    +  nat_gateway_id         = aws_nat_gateway.ngw[count.index].id
    +  route_table_id         = aws_route_table.private[count.index].id
    +  destination_cidr_block = "0.0.0.0/0"
    +}
    +
    +resource "aws_security_group" "https" {
    +  name        = "Incoming HTTP"
    +  description = "HTTP and HTTPS traffic for ${var.environment} environment of ${var.application_name}"
    +  vpc_id      = aws_vpc.app-vpc.id
    +
    +  # HTTP
    +  ingress {
    +    from_port   = 80
    +    to_port     = 80
    +    protocol    = "TCP"
    +    cidr_blocks = ["0.0.0.0/0"]
    +  }
    +  # HTTPS
    +  ingress {
    +    from_port   = 443
    +    to_port     = 443
    +    protocol    = "TCP"
    +    cidr_blocks = ["0.0.0.0/0"]
    +  }
    +  egress {
    +    description = "Outbound TCP Connections to Jenkins Master"
    +    from_port   = 8080
    +    protocol    = "TCP"
    +    to_port     = 8080
    +    cidr_blocks = ["0.0.0.0/0"]
    +  }
    +}
    +
    +resource "aws_security_group" "jenkins" {
    +  name        = "Jenkins Master"
    +  description = "Allows traffic to Jenkins Master."
    +  vpc_id      = aws_vpc.app-vpc.id
    +  # HTTP Alternative
    +  ingress {
    +    from_port       = 8080
    +    to_port         = 8080
    +    protocol        = "TCP"
    +    cidr_blocks     = ["0.0.0.0/0"]
    +    security_groups = [aws_security_group.https.id]
    +  }
    +  ingress {
    +    from_port   = 50000
    +    to_port     = 50000
    +    protocol    = "TCP"
    +    cidr_blocks = ["0.0.0.0/0"]
    +  }
    +}
    +
    +resource "aws_security_group" "jenkins_agent" {
    +  name        = "Jenkins Agents"
    +  description = "Allows traffic to Jenkins Agents."
    +  vpc_id      = aws_vpc.app-vpc.id
    +
    +  # Allow Incoming Traffic on JLNP Port -> 50000
    +  ingress {
    +    description = "Allows JLNP Traffic"
    +    from_port   = 50000
    +    protocol    = "tcp"
    +    self        = true
    +    to_port     = 50000
    +  }
    +
    +  tags = {
    +    "Environment" = var.environment
    +    "Application" = var.application_name
    +  }
    +}
    +
    +resource "aws_security_group" "outbound" {
    +  name        = "Outbound Traffic"
    +  description = "Allow any outbound traffic for ${var.environment} environment of ${var.application_name}"
    +  vpc_id      = aws_vpc.app-vpc.id
    +
    +  # Any Outbound connections allowing
    +  egress {
    +    from_port   = 0
    +    to_port     = 0
    +    protocol    = "-1"
    +    cidr_blocks = ["0.0.0.0/0"]
    +  }
    +}
    +
    +# DNS Resolution for local zone
    +resource "aws_service_discovery_private_dns_namespace" "jenkins_zone" {
    +  name        = "jenkins.local"
    +  description = "DNS Resolution for ${var.application_name}: ${var.environment} environment"
    +  vpc         = aws_vpc.app-vpc.id
    +}
    +
    +# Load Balancer
    +resource "aws_lb_target_group" "jenkins" {
    +  name                 = "Jenkins"
    +  port                 = 8080
    +  protocol             = "HTTP"
    +  target_type          = "ip"
    +  vpc_id               = aws_vpc.app-vpc.id
    +  deregistration_delay = 10
    +
    +  health_check {
    +    enabled = true
    +    path    = "/login"
    +    port    = "8080"
    +  }
    +
    +  depends_on = [aws_alb.jenkins]
    +}
    +
    +resource "aws_alb" "jenkins" {
    +  name               = "Jenkins"
    +  internal           = false
    +  load_balancer_type = "application"
    +
    +  subnets = [
    +    aws_subnet.public[0].id,
    +    aws_subnet.public[1].id,
    +  ]
    +
    +  security_groups = [
    +    aws_security_group.https.id,
    +  ]
    +  depends_on = [aws_internet_gateway.gw]
    +}
    +
    +resource "aws_alb_listener" "jenkins_listener" {
    +  load_balancer_arn = aws_alb.jenkins.arn
    +  port              = "80"
    +  protocol          = "HTTP"
    +
    +  default_action {
    +    type             = "forward"
    +    target_group_arn = aws_lb_target_group.jenkins.arn
    +  }
    +}
    +

As you can see, this file contains a lot of information about what is created by Terraform when you apply the configuration. Long story short:

• create a new VPC and new public and private subnets (according to the variables) and assign route tables to them
• assign Elastic IPs to the NAT gateways
• create security groups for incoming and outgoing traffic
• create a DNS service for name resolution
• start an ELB with a listener within the two public subnets, including a health check

Now there are still some files missing, like the Jenkins master task definition.

    Jenkins Master + +Link to heading

    The Jenkins master controls every action done by the agents. If any task needs to be scheduled, the master will start a new ECS container agent. The master is configured like this:

# The task definition for the Jenkins Master container
    +resource "aws_ecs_task_definition" "jenkins_master" {
    +  family = "JenkinsThesisAwsDev"
    +
    +  container_definitions = <<EOF
    +[
    +    {
    +        "name": "jenkins_master",
    +        "image": "falconone/jenkins_thesis:latest",
    +        "portMappings": [
    +            {
    +                "containerPort": 8080,
    +                "hostPort": 8080
    +            },
    +            {
    +                "containerPort": 50000,
    +                "hostPort": 50000
    +            }
    +        ],
    +        "environment": [
    +            {
    +                "name": "EXECUTION_ROLE_ARN",
    +                "value": "${aws_iam_role.JenkinsThesisAwsDev-task-execution-role.arn}"
    +            },
    +            {
    +                "name": "SECURITY_GROUP_IDS",
    +                "value": "${aws_security_group.jenkins_agent.id},${aws_security_group.outbound.id},${aws_security_group.efs_jenkins_security_group.id}"
    +            },
    +            {
    +                "name": "AWS_REGION_NAME",
    +                "value": "${data.aws_region.current.name}"
    +            },
    +            {
    +                "name": "ECS_CLUSTER_NAME",
    +                "value": "${aws_ecs_cluster.JenkinsThesisAwsDev.name}"
    +            },
    +            {
    +                "name": "JENKINS_URL",
    +                "value": "http://${aws_service_discovery_service.jenkins_master.name}.${aws_service_discovery_private_dns_namespace.jenkins_zone.name}:8080"
    +            },
    +            {
    +                "name": "LOG_GROUP_NAME",
    +                "value": "/ecs/${var.application_name}"
    +            },
    +            {
    +                "name": "IMAGE_NAME",
    +                "value": "falconone/jenkins-flutter:latest"
    +            },
    +            {
    +                "name": "LOCAL_JENKINS_URL",
    +                "value": ""
    +            },
    +            {
    +                "name": "SUBNETS",
+                "value": "${join(",", aws_subnet.public[*].id)}"
    +            },
    +            {
    +                "name": "CPU_AMOUNT",
    +                "value": "${var.agent_cpu_amount}"
    +            },
    +            {
    +                "name": "MEMORY_AMOUNT",
    +                "value": "${var.agent_memory_amount}"
    +            },
    +            {
    +                "name": "PLATFORM_VERSION",
    +                "value": "1.4.0"
    +            },
    +            {
    +                "name": "BUCKET_NAME",
    +                "value": "${var.s3_artifact_bucket_name}"
    +            },
    +            {
    +                "name": "S3_FOLDER_PREFIX",
    +                "value": "${var.s3_folder_prefix_name}"
    +            },
    +            {
    +                "name": "JENKINS_ADMIN_USERNAME",
    +                "value": "${var.jenkins_accountname}"
    +            },
    +            {
    +                "name": "JENKINS_ADMIN_PASSWORD",
    +                "value": "${random_string.jenkins_pass.result}"
    +            }
    +        ],
    +        "logConfiguration": {
    +            "logDriver": "awslogs",
    +            "options": {
    +                "awslogs-region": "eu-central-1",
    +                "awslogs-group": "/ecs/${var.application_name}",
    +                "awslogs-stream-prefix": "ecs"
    +            }
    +        }
    +    }
    +]
    +EOF
    +  # See here: https://stackoverflow.com/a/49947471
    +  # https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
    +  execution_role_arn = aws_iam_role.JenkinsThesisAwsDev-task-execution-role.arn
    +  task_role_arn      = aws_iam_role.JenkinsThesisAwsDev-task-execution-role.arn
    +
    +  # Memory and CPU values for Jenkins Master, can be adjusted in variables.tf
+  cpu                      = var.master_cpu_amount
+  memory                   = var.master_memory_amount
    +  requires_compatibilities = ["FARGATE"]
    +
    +  # requirement for AWS ECS Fargate containers
    +  network_mode = "awsvpc"
    +
    +  depends_on = [aws_efs_mount_target.jenkins_master_home]
    +
    +  # Storage options for jenkins_home
    +  volume {
    +    name = "service-storage"
    +
    +    efs_volume_configuration {
    +      file_system_id          = aws_efs_file_system.jenkins_master_home.id
    +      root_directory          = "/var/jenkins_home"
    +      transit_encryption      = "ENABLED"
    +      transit_encryption_port = 2999
    +      authorization_config {
    +        access_point_id = aws_efs_access_point.jenkins_master_home.id
    +        # Necessary for other resources like S3, EFS or AWS SSM!
    +        iam = "ENABLED"
    +      }
    +    }
    +  }
    +}
    +
    +# DNS Resolution for Jenkins Master
    +resource "aws_service_discovery_service" "jenkins_master" {
    +  name = "master"
    +  dns_config {
    +    namespace_id = aws_service_discovery_private_dns_namespace.jenkins_zone.id
    +
    +    dns_records {
    +      ttl  = 60
    +      type = "A"
    +    }
    +    dns_records {
    +      ttl  = 60
    +      type = "SRV"
    +    }
    +    routing_policy = "MULTIVALUE"
    +  }
    +}
    +

    Creating the storage with Terraform + +Link to heading

There is also the storage.tf file, which creates all storage-related resources, like the S3 bucket or the EFS file system and access point used to mount persistent storage into the Jenkins master container:

    resource "aws_security_group" "efs_jenkins_security_group" {
    +  name        = "efs_access"
    +  description = "Allows efs access from jenkins master to efs storage on port 2049 for ${var.environment} environment."
+  vpc_id      = aws_vpc.app-vpc.id
    +
    +  #   EFS default port
    +  ingress {
    +    description = "EFS access"
    +    from_port   = 2049
    +    to_port     = 2049
    +    protocol    = "tcp"
    +    # security_groups = [aws_security_group.efs_jenkins_security_group.id]
    +    # Self is required to allow access for this group on EFS storage
    +    self = "true"
    +  }
    +}
    +
    +# Create the EFS Storage
    +resource "aws_efs_file_system" "jenkins_master_home" {
    +  creation_token = "jenkins_master"
    +
    +  tags = {
    +    Name        = "jenkins_master"
    +    environment = var.environment
    +  }
    +}
    +
    +# Create the EFS Mount Target
    +resource "aws_efs_mount_target" "jenkins_master_home" {
    +  file_system_id  = aws_efs_file_system.jenkins_master_home.id
+  subnet_id       = aws_subnet.public[0].id
    +  security_groups = [aws_security_group.efs_jenkins_security_group.id]
    +}
    +
    +resource "aws_efs_access_point" "jenkins_master_home" {
    +  file_system_id = aws_efs_file_system.jenkins_master_home.id
    +  posix_user {
    +    uid = 1000
    +    gid = 1000
    +  }
    +
    +  root_directory {
    +    path = "/jenkins-home"
    +
    +    # Create the path with this rights, if it does not exist.
    +    creation_info {
    +      owner_gid   = 1000
    +      owner_uid   = 1000
    +      permissions = 755
    +    }
    +  }
    +}
    +
    +# S3 Bucket for Artifact Storage
    +resource "aws_s3_bucket" "jenkins_artifact_storage" {
    +  bucket = var.s3_artifact_bucket_name
    +  acl    = "private"
    +
    +  tags = {
    +    Name        = var.s3_artifact_bucket_name
    +    Environment = var.environment
    +  }
    +}
    +

    Necessary policies + +Link to heading

As the master starts an agent when necessary, it needs the permission to run new ECS tasks. This and more is configured inside the iam.tf file.

    resource "aws_iam_role" "JenkinsThesisAwsDev-task-execution-role" {
    +  name               = "${var.application_name}-task-execution-role"
    +  assume_role_policy = data.aws_iam_policy_document.ecs-task-assume-role.json
    +}
    +
    +data "aws_iam_policy_document" "ecs-task-assume-role" {
    +  statement {
    +    actions = ["sts:AssumeRole"]
    +    effect  = "Allow"
    +    principals {
    +      type        = "Service"
    +      identifiers = ["ecs-tasks.amazonaws.com"]
    +    }
    +  }
    +}
    +
    +data "aws_iam_policy" "ecs-task-execution-role" {
    +  arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
    +}
    +
    +# Attach the above policy to the execution role.
    +resource "aws_iam_role_policy_attachment" "ecs-task-execution-role-default" {
    +  role       = aws_iam_role.JenkinsThesisAwsDev-task-execution-role.name
    +  policy_arn = data.aws_iam_policy.ecs-task-execution-role.arn
    +}
    +
    +
    +# Attach the required permissions to the Jenkins Task
    +resource "aws_iam_role_policy_attachment" "ecs-task-execution-role-jenkins" {
    +  role       = aws_iam_role.JenkinsThesisAwsDev-task-execution-role.name
    +  policy_arn = aws_iam_policy.jenkins_agents.arn
    +}
    +
    +# Data Policy for Jenkins Master to start new Jenkins Agents
    +# https://stackoverflow.com/questions/62831874/terrafrom-aws-iam-policy-document-condition-correct-syntax
    +# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document#condition
    +data "aws_iam_policy_document" "jenkins_master" {
    +  statement {
    +    actions = [
    +      "ecs:RegisterTaskDefinition",
    +      "ecs:ListClusters",
    +      "ecs:DescribeContainerInstances",
    +      "ecs:ListTaskDefinitions",
    +      "ecs:DescribeTaskDefinition",
    +      "ecs:DeregisterTaskDefinition",
    +    ]
    +    resources = ["*"]
    +    effect    = "Allow"
    +  }
    +
    +  # Listing of Container Instances
    +  statement {
    +    actions   = ["ecs:ListContainerInstances"]
    +    effect    = "Allow"
    +    resources = [aws_ecs_cluster.JenkinsThesisAwsDev.arn]
    +  }
    +  # Run Tasks in ECS
    +  statement {
    +    actions   = ["ecs:RunTask"]
    +    effect    = "Allow"
    +    resources = ["arn:aws:ecs:${data.aws_region.current.name}:526531137161:task-definition/*"]
    +    condition {
    +      test     = "ArnEquals"
    +      variable = "ecs:cluster"
    +      values = [
    +        aws_ecs_cluster.JenkinsThesisAwsDev.arn
    +      ]
    +    }
    +  }
    +
    +  statement {
    +    actions = ["ecs:StopTask"]
    +    effect  = "Allow"
    +    resources = [
    +      "arn:aws:ecs:*:*:task/*",
    +      "arn:aws:ecs:${data.aws_region.current.name}:526531137161:task/*"
    +    ]
    +    condition {
    +      test     = "ArnEquals"
    +      variable = "ecs:cluster"
    +      values = [
    +        aws_ecs_cluster.JenkinsThesisAwsDev.arn
    +      ]
    +    }
    +  }
    +  statement {
    +    actions = ["ecs:DescribeTasks"]
    +    effect  = "Allow"
    +    resources = [
    +      "arn:aws:ecs:*:*:task/*",
    +      "arn:aws:ecs:${data.aws_region.current.name}:526531137161:task/*"
    +    ]
    +    condition {
    +      test     = "ArnEquals"
    +      variable = "ecs:cluster"
    +      values = [
    +        aws_ecs_cluster.JenkinsThesisAwsDev.arn
    +      ]
    +    }
    +  }
    +
    +  statement {
    +    actions   = ["iam:GetRole", "iam:PassRole"]
    +    effect    = "Allow"
    +    resources = [aws_iam_role.JenkinsThesisAwsDev-task-execution-role.arn]
    +  }
    +
+  # S3 bucket related policies for storing build artifacts
    +  statement {
    +    actions = [
    +      "s3:ListBucket"
    +    ]
    +    effect    = "Allow"
    +    sid       = "AllowListingOfFolder"
    +    resources = [aws_s3_bucket.jenkins_artifact_storage.arn]
    +    condition {
    +      test     = "StringLike"
    +      variable = "s3:prefix"
    +      values   = ["${var.s3_folder_prefix_name}/*"]
    +    }
    +  }
    +
    +  # Allow the listing of bucket locations.
    +  statement {
    +    actions = [
    +      "s3:GetBucketLocation"
    +    ]
    +    effect    = "Allow"
    +    sid       = "AllowListingOfBuckets"
    +    resources = [aws_s3_bucket.jenkins_artifact_storage.arn]
    +  }
    +
    +  statement {
    +    sid    = "AllowS3ActionsInFolder"
    +    effect = "Allow"
    +    actions = [
    +      "s3:PutObject",
    +      "s3:GetObject",
    +      "s3:DeleteObject",
    +      "s3:ListBucket",
    +    ]
    +    resources = ["${aws_s3_bucket.jenkins_artifact_storage.arn}/${var.s3_folder_prefix_name}/*"]
    +  }
    +
    +}
    +
    +# Policy for Jenkins Master to start new Jenkins Agents
    +# https://stackoverflow.com/questions/62831874/terrafrom-aws-iam-policy-document-condition-correct-syntax
    +resource "aws_iam_policy" "jenkins_agents" {
    +
    +  description = "Allows the Jenkins master to start new agents."
    +  name        = "${var.application_name}_ecs_policy"
    +
    +  # Policy
    +  # Hint: Curly braces may not be indented otherwise Terraform fails
    +  policy = data.aws_iam_policy_document.jenkins_master.json
    +}
    +

    Conclusion + +Link to heading

With all of the files listed above, the necessary AWS infrastructure is created within a few minutes. There is no need to click anything; you just have to run terraform apply from the command line. As it's hard to get all the code inside this post, I created a new repository on GitHub, where I'll upload all the required code.
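The usual workflow from the command line is just a handful of commands, run from the directory containing the .tf files:

# Download the providers and initialise the working directory.
terraform init

# Show what would be created before actually touching AWS.
terraform plan

# Create the whole infrastructure; terraform destroy removes it again.
terraform apply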

The cool thing about all the stuff above is that it creates everything, and from now on we only have to worry about the container setup, which I'm going to cover in part 2 of the series.

    If you have any questions, do not hesitate to contact me! I hope you enjoyed reading this post and we will see each other in part 2, where we talk about how to configure the Jenkins master and agent.

    See more in part 2 of the series.

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/infrastructure_flutter_part2/index.html b/posts/infrastructure_flutter_part2/index.html new file mode 100644 index 0000000..4454a87 --- /dev/null +++ b/posts/infrastructure_flutter_part2/index.html @@ -0,0 +1,536 @@ +Developing Flutter apps with cloud infrastructure: Part 2 · Engineering Blog +

    Developing Flutter apps with cloud infrastructure: Part 2

    Introduction + +Link to heading

    Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the required Jenkins containers (master/agent) are set up. Let’s dive into it.

    Container setups + +Link to heading

    Jenkins Master container + +Link to heading

    The Jenkins master container is the brain of the entire application. It controls and schedules new ECS Jenkins agents if necessary. Every piece of configuration is populated from environment variables, as you can see in the Dockerfile for the Jenkins master:

    FROM jenkins/jenkins:2.263.1-lts
    +
    +# Install the Jenkins Plugins from the plugins text file
    +# like described in the docs:
    +# https://github.com/jenkinsci/docker#plugin-installation-manager-cli-preview
    +COPY code/docker/Test/plugins.txt /usr/share/jenkins/ref/plugins.txt
    +RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
    +
    +# Copy the groovy config file
    +COPY code/docker/Test/initialConfig.groovy /usr/share/jenkins/ref/init.groovy.d/initialConfigs.groovy
    +COPY code/docker/Test/jenkins.yaml /usr/share/jenkins/ref/jenkins.yaml
    +
    +# Create the app pipeline from config files
    +COPY code/docker/Test/helloWorld.xml /usr/share/jenkins/ref/jobs/Hello-World/config.xml
    +COPY code/docker/Test/appConfig.xml /usr/share/jenkins/ref/jobs/Flutter-App/config.xml
    +
    +# Disable the installation wizard
    +ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
    +

    If you want to have a look at the source code, you can find it inside the Github repository I created earlier for this series. The Dockerfile basically copies the required configuration files into the image, so they are baked into the container.

    Another idea would be to store the configuration in environment variables (if supported) or, even better, in a volume that the container can mount. With this you don’t have to build a new image every time you change the config; instead you just have to restart your container.
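
    A minimal sketch of the volume idea, assuming a locally built image named jenkins-master and a host directory ./jenkins-config containing the jenkins.yaml (both names are hypothetical; the configuration-as-code plugin reads the file path from the CASC_JENKINS_CONFIG environment variable):

    # Mount the configuration read-only and point the configuration-as-code plugin at it,
    # so a config change only needs a container restart instead of an image rebuild
    docker run -d \
      --name jenkins-master \
      -p 8080:8080 -p 50000:50000 \
      -v "$(pwd)/jenkins-config:/var/jenkins_config:ro" \
      -e CASC_JENKINS_CONFIG=/var/jenkins_config/jenkins.yaml \
      jenkins-master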

    File | Content
    plugins.txt | Contains a list of plugins to be installed
    initialConfigs.groovy | Groovy settings
    jenkins.yaml | Jenkins main configuration file
    helloWorld.xml and appConfig.xml | Pipeline configuration files

    The table above contains a description of every file that is copied inside the Jenkins master container.

    Jenkins configuration file + +Link to heading

    The most important file is the jenkins.yaml file. It contains the settings to configure the Jenkins master and looks like this:

    jenkins:
    +  slaveAgentPort: 50000
    +  # System Message which is displayed on the Dashboard.
    +  systemMessage: Jenkins Master for FOM Bachelor Thesis
    +  agentProtocols:
    +    - JNLP4-connect
    +  authorizationStrategy:
    +    loggedInUsersCanDoAnything:
    +      allowAnonymousRead: false
    +  remotingSecurity:
    +    enabled: true
    +  securityRealm:
    +    local:
    +      allowsSignup: false
    +      # Create the local administrator account with data from environment variables
    +      users:
    +        - id: ${JENKINS_ADMIN_USERNAME}
    +          password: ${JENKINS_ADMIN_PASSWORD}
    +  clouds:
    +    - ecs:
    +        credentialsId: ""
    +        # ECS Cluster ARN
    +        cluster: ${ECS_CLUSTER_NAME}
    +        name: ecs-cloud
    +        # Environment Variable for the AWS Region e. g. eu-central-1
    +        regionName: ${AWS_REGION_NAME}
    +        # Local Jenkins URL, also populated by environment variable,
    +        # default is master.jenkins.local
    +        jenkinsUrl: ${JENKINS_URL}
    +        tunnel: ${LOCAL_JENKINS_URL}
    +        templates:
    +          - assignPublicIp: true
    +            # Amount of CPU Resources, configured in Terraform files
    +            cpu: ${CPU_AMOUNT}
    +            memoryReservation: ${MEMORY_AMOUNT}
    +            executionRole: ${EXECUTION_ROLE_ARN}
    +            # Name of the Docker image used -> also environment file
    +            image: ${IMAGE_NAME}
    +            label: Flutter
    +            launchType: FARGATE
    +            logDriver: awslogs
    +            # Logging Options for AWS Cloudwatch,
    +            # populated mainly by environment variables.
    +            logDriverOptions:
    +              - name: awslogs-group
    +                value: ${LOG_GROUP_NAME}
    +              - name: awslogs-region
    +                value: ${AWS_REGION}
    +              - name: awslogs-stream-prefix
    +                value: jenkins-agent
    +            securityGroups: ${SECURITY_GROUP_IDS}
    +            subnets: ${SUBNETS}
    +            templateName: jenkins-flutter-agent
    +            # AWS Fargate platform version, "Latest" refers to 1.3.0 but 1.4.0 is the latest -> environment variable.
    +            platformVersion: "${PLATFORM_VERSION}"
    +            # List of environment variables for the Jenkins Agent.
    +            # Contains stuff like AWS CLI environment variables etc.
    +            # which are populated in the variables.tf file.
    +            # Has to be changed, because environment variables are displayed,
    +            # see https://preview.tinyurl.com/y3eg6kav.
    +            # Like AWS Configuration as Code Secrets Manager Plugin: 
    +            # https://plugins.jenkins.io/configuration-as-code-secret-ssm/
    +            environments:
    +            - name: "AWS_ACCESS_KEY_ID"
    +              value: "${AWS_ACCESS_KEY_ID}"
    +            - name: "AWS_SECRET_ACCESS_KEY"
    +              value: "${AWS_SECRET_ACCESS_KEY}"
    +            - name: "AWS_DEFAULT_REGION"
    +              value: "${AWS_DEFAULT_REGION}"
    +aws:
    +  s3:
    +    # AWS S3 Bucket Name
    +    container: "${BUCKET_NAME}"
    +    disableSessionToken: false
    +    # Bucket Folder
    +    prefix: "${S3_FOLDER_PREFIX}/"
    +    useHttp: false
    +    usePathStyleUrl: false
    +

    The file above contains the settings to configure the Jenkins master. The settings are taken from environment variables, which are created by the Terraform files from part 1. Just to mention a few important points:

    • Admin username and password are configured from environment variables
    • Logging is configured
    • Storage for artifacts for the Jenkins S3 plugin is configured and
    • the configuration for the test environment is stored (just a hint: store this in AWS Secrets Manager or something similar, because environment variables are visible from the outside!)

    The other files are not that interesting, but you can find them inside the Github repository.

    Jenkins Agent configuration file + +Link to heading

    On the other hand there is the configuration for the Jenkins agent. The following listing shows the content of the Dockerfile:

    FROM jenkins/inbound-agent:4.6-1-alpine
    +# Prerequisites
    +# Required for Alpine Linux as it contains no curl
    +# Ruby stuff required for installation of Fastlane
    +USER root
    +RUN apk --no-cache add curl ruby ruby-dev g++ make openssl
    +
    +# Install lcov, currently in Edge branch, testing repo
    +RUN apk --no-cache add lcov \
    +--repository=http://dl-cdn.alpinelinux.org/alpine/edge/testing
    +
    +# Required Settings for Fastlane to work
    +ENV LC_ALL="en_US.UTF-8"
    +ENV LANG="en_US.UTF-8"
    +# Install the Fastlane Ruby package
    +RUN gem install fastlane -N
    +
    +# Create a new user
    +USER jenkins
    +WORKDIR /home/jenkins
    +
    +# Install Android stuff
    +RUN mkdir -p Android/sdk/
    +ENV ANDROID_SDK_ROOT /home/jenkins/Android/Sdk
    +RUN mkdir -p .android && touch .android/repositories.cfg
    +
    +# Setup Android SDK
    +RUN wget -q -O sdk-tools.zip \
    +https://dl.google.com/android/repository/commandlinetools-linux-6858069_latest.zip
    +RUN unzip sdk-tools.zip && rm sdk-tools.zip
    +RUN mv -v cmdline-tools Android/sdk/cmdline-tools/
    +RUN cd Android/sdk/cmdline-tools/bin && yes | \
    +./sdkmanager --sdk_root=$ANDROID_SDK_ROOT --licenses
    +RUN cd Android/sdk/cmdline-tools/bin && \
    +./sdkmanager --sdk_root=$ANDROID_SDK_ROOT "build-tools;29.0.3" "patcher;v4" \
    +"platform-tools" "platforms;android-29" "sources;android-29"
    +
    +# Download Flutter SDK
    +RUN git clone https://github.com/flutter/flutter.git
    +
    +# Update Path Variable with all installed tools
    +ENV PATH "$PATH:/home/jenkins/flutter/bin"
    +
    +# Add the Sylph package for integration tests
    +RUN flutter pub global activate sylph 
    +
    +ENV PATH "$PATH:/home/jenkins/.pub-cache/bin"
    +
    +# Run basic checks to download the Dart SDK
    +RUN flutter doctor
    +RUN fastlane actions
    +RUN sylph --help
    +

    The Dockerfile is based on an official Alpine Linux Jenkins agent image, where all the required Jenkins parts are already installed. Other requirements are:

    • software required for development and installation purposes
    • lcov for test coverage
    • language and encoding settings (UTF-8)
    • Fastlane for deployment to the Google Play and Apple app stores
    • then the Android SDK is installed inside /home/jenkins/Android/Sdk
    • afterwards the Flutter SDK is installed (needed for compilation) and last but not least
    • Sylph is installed for device testing on AWS

    As this Dockerfile is mainly for testing and demonstration purposes, there are also a few debugging commands at the end to verify the successful installation of the tools.

    Description of the workflow + +Link to heading

    The idea behind this kind of architecture is pretty simple but effective. As soon as you create the infrastructure with Terraform, the Jenkins master is started once everything is in place. The master is configured through environment variables and a config file that tells it where to find the Jenkinsfile with the pipeline instructions. The Jenkinsfile looks like this:

    pipeline {
    +    
    +    agent {
    +        // Tells the pipeline to use an AWS ECS agent,
    +        // because of the used label.
    +        label 'Flutter'
    +    }
    +
    +    stages {
    +        stage('Build') {
    +            steps {
    +                echo 'Building..'            
    +                // Build the Android App
    +                dir('testing_codelab/step_07/') {
    +                    sh 'flutter build appbundle'
    +                    
    +                    // Archive the Build Artifact on the 
    +                    // created S3 Bucket.
    +                    archiveArtifacts "/build/app/outputs/bundle/release/*"
    +                }
    +            }
    +        }
    +        stage('Check the Code Quality') {
    +            steps {
    +                dir('testing_codelab/step_07/') {
    +                    // Check the Code Quality
    +                    // Flutter Analyze performs a static analysis
    +                    // See here for more details: 
    +                    // https://flutter.dev/docs/reference/flutter-cli#flutter-commands
    +                    echo 'Doing Code Quality Tests'
    +                    sh 'flutter analyze'
    +                }
    +            }
    +        }
    +
    +        stage('Unit Tests') {
    +            steps {
    +                dir('testing_codelab/step_07/') {
    +                    // Run all unit tests
    +                    echo 'Doing Unit Tests'
    +                    // Point to Unit Test directory
    +                    // Coverage is reported to ./coverage/lcov.info
    +                    sh 'flutter test --coverage -r expanded test/models/'
    +                    // Reporting of Unit Test Coverage
    +                    sh 'lcov -l codecov/*'
    +                }
    +            }
    +        }
    +        
    +        stage('Widget Tests') {
    +            
    +            steps {
    +                dir('testing_codelab/step_07/') {
    +                    // Run all Widget tests on the code
    +                    echo 'Running Widget Tests'
    +                    // Runs Widget tests on all files
    +                    sh 'flutter test --coverage -r expanded test/'
    +                    // Reporting of Widget Test Coverage
    +                    sh 'lcov -l codecov/*'
    +                }
    +            }
    +        }
    +        stage('Integration Tests') {
    +            steps {
    +                dir('testing_codelab/step_07/') {
    +                    // Running integration tests on AWS Devicefarm, uses Sylph and a config file
    +                    echo 'Running integrations tests on AWS Devicefarm...'
    +                    sh 'sylph -c sylph.yaml'
    +                }
    +            }
    +        }
    +
    +        stage('Beta Deployment') {
    +            // Deploy as Beta Release if no there is no git tag
    +            when { 
    +                not { 
    +                    buildingTag() 
    +                } 
    +            }
    +
    +            steps {
    +                echo 'Deploying beta version to Play Store'
    +                sh 'fastlane beta'
    +            }
    +        }
    +        stage('Release Deployment') {
    +            // Deploy as full release if the current commit contains a git tag
    +            // Captures screenshots and uploads the app file to playstore
    +            when { 
    +                buildingTag() 
    +            }
    +            steps {
    +                echo 'Deploying release version to Play Store'
    +                sh 'fastlane playstore'
    +            }
    +        }
    +    }
    +    post {
    +        // Post Tasks
    +        always {
    +            echo "None so far..."
    +        }
    +    }
    +}
    +

    The cool part from now on is that you create the infrastructure automatically, and as soon as everything is available the pipeline is ready to run. The master gets its instructions from the Jenkinsfile, which (in this case) lives inside a Github repository. If you start a build job, the master starts an agent, the agent downloads the entire repository and executes everything listed inside the Jenkinsfile.

    Overview of the pipeline + +Link to heading

    The pipeline is designed to run in the order as listed inside the Jenkinsfile above:

    1. Build an Android app from the source code (the version for iOS requires a device running macOS)
    2. The next step is to check the code quality and run the tests (this step requires lcov as mentioned before)
    3. In the last step the build is deployed as a beta build; if the commit contains a tag, it is deployed as a production version to the app stores

    Conclusion + +Link to heading

    This was the 2nd part of the Flutter AWS series. The next and last part will cover the tests and the deployment process. Thanks for reading!

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/kured/index.html b/posts/kured/index.html new file mode 100644 index 0000000..ace6043 --- /dev/null +++ b/posts/kured/index.html @@ -0,0 +1,86 @@ +Automating Kubernetes operating system updates with Kured, kOps and Flatcar · Engineering Blog +

    Automating Kubernetes operating system updates with Kured, kOps and Flatcar

    Introduction + +Link to heading

    Hello everyone, it’s time for a new post.

    As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have thousands of virtual servers, which no engineer can update manually. So what do you do if you want to automate these operating system updates?

    The solution + +Link to heading

    Fortunately, there is a great solution to this problem. This solution is called Kured. Kured enables you as a Kubernetes administrator to automate operating system updates. You can choose between different settings like

    • Day of the week when updates should be installed (e.g. Monday to Friday)
    • Start and end time when updates should be installed (my suggestion: office hours like 8 a.m. to 2 p.m.)
    • Labels and annotations that should be added before, during and after the update
    • Webhook notifications like Slack or Teams
    • How many servers should restart in parallel
    • A “cooldown phase”, i.e. how long to wait between each server

    Kured looks for a file that indicates the server needs to be rebooted, such as /var/run/reboot-required. The path itself can be configured in Kured, and as soon as this file is detected, Kured coordinates the reboot across all of the servers. If you configure it to reboot only one server at a time, Kured will manage this by itself.

    But watch out: deadlocks can happen if the rebooting server is deleted while it is restarting!

    Installation and configuration of Kured + +Link to heading

    If you want to install Kured, the easiest way is to install it with Helm. But first, we’re going to prepare the configuration for Kured. Some noteworthy settings are:

    • We want to allow reboots only between 8:00 and 15:00 UTC
    • Allow reboots only from Monday to Thursday
    • Add a notification URL
    • Add settings for the automatic release of the lock, as described here
    • Add labels and annotations to the nodes, see here

    The overall values file will look like this:

    configuration:
    +    # When to finish the reboot window (default "23:59")
    +    endTime: "15" 
    +    # Schedule reboots only on these days (default [mo-sun])
    +    rebootDays: [mon, tue, wed, thu] 
    +    # Notification URL with the syntax as following: 
    +    # https://containrrr.dev/shoutrrr/services/overview/
    +    notifyUrl: "https://webhook.example.com" 
    +    # only reboot after this time (default "0:00")
    +    startTime: "8" 
    +    # time-zone to use (valid zones from "time" golang package)
    +    timeZone: "UTC" 
    +    # log format specified as text or json, defaults to text
    +    logFormat: "text"
    +    # How long to hold the lock after rebooting:
    +    lockReleaseDelay: 5m
    +    # Automatically release the lock after 30 mins if anything went wrong
    +    lockTtl: 30m
    +    # Add annotations to the nodes
    +    annotateNodes: true
    +    # Labels, see 
    +    # https://kured.dev/docs/configuration/#adding-node-labels-before-and-after-reboots
    +    preRebootNodeLabels: [kured=needs-updates]
    +    postRebootNodeLabels: [kured=finished-updates]
    +

    These values allow us to configure Kured as we want, when installing it with Helm. You can find more information on the Kured installation page.

    Finally, we want to install Kured with Helm. To do so, simply run

    helm repo add kubereboot https://kubereboot.github.io/charts
    +helm install my-release kubereboot/kured
    +

    That’s it! You’ve successfully installed Kured!🥳
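
    If you want Kured to pick up the values file prepared above, you can pass it to Helm with the -f flag; a small sketch (release and file name are just examples):

    # Install Kured with the custom configuration from the values file
    helm install my-release kubereboot/kured -f values.yaml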

    Using Kured together with kOps and Flatcar + +Link to heading

    kOps is a great tool which allows you to easily set up and maintain a Kubernetes cluster on the major cloud providers. kOps supports a number of different operating systems. One of them is Flatcar, which is a friendly fork of CoreOS Container Linux.

    Flatcar supports A/B partitions, where the running OS partition is read-only while system updates are written to the other partition. It’s also an OS designed for container usage, so it fits perfectly for running a Kubernetes cluster.

    I’m not explaining in this post how to create a new kOps managed Kubernetes cluster. The relevant part for managing the system updates of your nodes is to set the updatePolicy to external, like this:

    updatePolicy: external
    +

    kOps used to state in their documentation that they manage OS updates for Flatcar. This behaviour was removed in an earlier release, although the statement can still be found in parts of the documentation.

    Once you have configured your node group to use an external update policy, we’re done with the kOps part. With the Kured configuration in place, we can finally test it.

    Hint: You may have to run a rolling update of your cluster with the kops rolling-update cluster command.
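
    A sketch of that rolling update (the cluster name is hypothetical):

    # Preview which nodes would be replaced, then actually perform the rolling update
    kops rolling-update cluster my-cluster.example.com
    kops rolling-update cluster my-cluster.example.com --yes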

    Simply create a file with touch /var/run/reboot-required on any Kubernetes node and watch the reboot magic happen.
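
    If you want to follow along, a quick sketch of triggering and observing a reboot (the node name is a placeholder):

    # On the node: request a reboot by creating the sentinel file
    sudo touch /var/run/reboot-required

    # From your workstation: watch the node being cordoned, drained and rebooted
    kubectl get nodes -o wide -w

    # Inspect the labels and annotations Kured adds, e.g. the pre/post reboot labels from the values file
    kubectl describe node <node-name> | grep -i kured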

    Conclusion + +Link to heading

    In this blog post, I showed you how to combine the great Kured tool with kOps, to manage automatic system updates. I hope you liked the post, if you have any questions, feel free to contact me.

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/mbr_and_gpt/index.html b/posts/mbr_and_gpt/index.html new file mode 100644 index 0000000..bc77b36 --- /dev/null +++ b/posts/mbr_and_gpt/index.html @@ -0,0 +1,46 @@ +GPT and MBR: Moving from MBR to GPT · Engineering Blog +

    GPT and MBR: Moving from MBR to GPT

    Intro + +Link to heading

    About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neither read nor write intensive. Unfortunately I migrated from my previous HDD, which used a Master Boot Record (MBR), and kept using the MBR.

    This turned out to be a problem since MBR doesn’t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive.

    A brief overview about MBR + +Link to heading

    I don’t want to get too deep into the history of the MBR because it is pretty old. If you’re interested in its history you can find a lot about it on the internet. The MBR contains the boot sector of a disk and starts your operating system.

    As the MBR is pretty old, one of its downsides is the size limitation of partitions to 2 TiB. You also cannot have more than four primary partitions. If you want more than four partitions, you have to convert a primary partition into an extended partition. This extended partition can hold multiple logical partitions within it.

    The new GUID partition table (GPT) + +Link to heading

    After the standardization of the Unified Extensible Firmware Interface (UEFI), of which GPT is a part, the classic BIOS has been used less and less while UEFI became more popular. The GPT of a disk consists of

    • A protective master boot record in sector 0 (for compatibility, so MBR-only operating systems and tools still recognize the disk)
    • A primary GUID partition table
    • Support for at least 128 partitions and drives with a capacity of up to 8 ZiB
    • Supported operating systems: GNU/Linux, Windows Vista and later

    Moving from MBR to GPT + +Link to heading

    I wanted to do this for a long time, but until now I didn’t have time to read up on the topic. Before making any changes to your system, always create a backup and try to restore some files. I can really recommend Clonezilla for this, as it’s open source and works with many filesystems (you should give it a try).

    Converting the Master Boot Record + +Link to heading

    You can easily convert the Master Boot Record with the open source program gdisk. It is already included in the GParted Live distribution; you just have to boot it and open up a terminal.

    Inside the terminal you just have to run the following commands:

    # Run this on the appropriate disk, in my case /dev/sda
    +gdisk /dev/sda
    +# Enter the recovery menu with r
    +r
    +# Load the MBR and create a GPT from this
    +f
    +# Write the data to disk
    +w
    +

    Pretty simple, isn’t it? I had to search for a while on the internet, and you can find a lot of suggestions involving third-party tools. I found a way that works for me with just four basic commands.
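
    To verify that the conversion worked, you can print the partition table again; a quick check:

    # gdisk reports whether the disk now contains a valid GPT
    gdisk -l /dev/sda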

    Summary: GPT vs MBR + +Link to heading

    To sum up, the list below explains some of the differences between MBR and GPT.

    • Number of supported partitions: MBR supports up to 4 primary partitions (or more with an extended partition), GPT supports up to 128 partitions (natively!)
    • Maximum size of partitions: MBR is limited to 2 TiB per partition, GPT supports up to eight zebibytes according to IBM
    • BIOS / UEFI support: MBR supports only BIOS, GPT supports both
    • Supported operating systems: MBR works with almost any operating system, for GPT more information can be found here

    If you’re using newer hardware (like a mainboard with a UEFI) it is a good idea to use GPT instead of the old-fashioned MBR.

    Further reading + +Link to heading

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/nginx_forward_proxy/index.html b/posts/nginx_forward_proxy/index.html new file mode 100644 index 0000000..06f248c --- /dev/null +++ b/posts/nginx_forward_proxy/index.html @@ -0,0 +1,184 @@ +Establishing proxy support for an application without proxy support · Engineering Blog +

    Establishing proxy support for an application without proxy support

    Introduction + +Link to heading

    Hello again dear reader :-)! Some time has passed since my last blog post because I have been busy with university, but now that exams are done I have some more time for writing the latest post. Recently I stumbled upon an application that needed internet access but unfortunately didn’t support a proxy server yet. At that point of the project we had to find a way to let this application communicate with the internet, even though the server itself had no direct connection to the internet. This is the topic of this post.

    Overview of the application + +Link to heading

    The application itself is an HTTP webservice that allows you to send requests to it, which are then forwarded to a SaaS application on the internet. Let’s imagine we’re calling our webservice without a configured proxy; this would be the result:

    [Image: schematic of the environment without a proxy]

    This is actually a pretty simple schematic of the environment, because I removed any firewalls as well as the proxy server of course. This is the way the developer expected the application to work. Unfortunately it could not access the internet without the proxy server.

    Trying to get internet access with our own solution + +Link to heading

    Usage of Nginx as forward proxy + +Link to heading

    As I use the Nginx webserver fairly often, this became my first idea for a solution. Nginx works quite well as a reverse proxy, so why not use it as a forward proxy? If you search the internet for articles on this, you’ll find a post on Stackexchange with several answers. One suggested using Squid instead, because forward proxying does not work with Nginx at all; another replied that you can compile Nginx yourself with a custom module from Github.

    The problem now is that the OS we used was Windows. So we would have had to compile the code for Windows. Arbitrary code from Github. In a production environment. Mhh, not a good idea. In the end I tried a few things and ended up with this Nginx config file:

    http {
    +    server {
    +        listen 80;
    +        listen 443 ssl http2;
    +
    +        # Path to server certificate
    +        ssl_certificate example.com.crt;
    +        ssl_certificate_key example.com.key;
    +
    +        # Log file directory
    +        access_log logs/forward_proxy.access.log;
    +        error_log  logs/forward_proxy.error.log debug;
    +
    +        location / {
    +
    +            # proxy (default)
    +            set $proxy_host "$http_host";
    +            set $url "$scheme://$http_host$request_uri";
    +
    +            # Set Proxy header
    +            proxy_set_header        Host            $http_host;
    +            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    +            proxy_set_header        X-Scheme        $scheme;
    +            proxy_set_header X-Script-Name $request_uri;  
    +
    +            proxy_redirect off;
    +            proxy_set_header Host $proxy_host;
    +            proxy_set_header X-Forwarded-Host $http_host;
    +
    +            proxy_set_header Request-URL "$scheme://$http_host$request_uri";
    +            set $test_req "http://$http_host$uri$is_args$args;";
    +
    +            # Forward the request to the proxy
    +            proxy_pass "http://123.45.67.89:8080http://$http_host$uri$is_args$args";
    +        }
    +    }
    +}
    +

    Unfortunately this did not help at all. I used Wireshark to follow the network traffic and did some troubleshooting with curl. When I used curl to connect to the proxy server, you could see that it used the HTTP CONNECT method. When we tried to use Nginx, we just saw a plain HTTP GET instead.
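
    For reference, this is roughly how the difference can be reproduced with curl (the proxy address is the same placeholder as in the config above); a real forward proxy client issues a CONNECT for HTTPS targets:

    # With -v you can see curl opening a tunnel via "CONNECT example.com:443" through the proxy
    curl -v -x http://123.45.67.89:8080 https://example.com/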

    At that point I realised we were on the wrong track and had to try something different.

    A different approach with a real forward proxy + +Link to heading

    The second way we tried was using Apache httpd as a forward proxy, as it comes with built-in forward proxy support on Windows. The config file itself is quite long, so I’m just listing the most interesting lines here:

    <VirtualHost *:443>
    +    SSLEngine on
    +
    +    # DocumentRoot "${SRVROOT}/htdocs"
    +
    +    # Path to server certificates
    +    SSLCertificateFile      "${SRVROOT}\conf\ssl\example.com.crt"
    +    SSLCertificateKeyFile   "${SRVROOT}\conf\ssl\example.com.key"
    +
    +    # Forward Proxy
    +    ProxyRequests On
    +    ProxyVia On
    +
    +    <Proxy "/">
    +        # Require host internal.example.com
    +        Require host server.internal.org
    +        Require ip 127.0.0.1
    +        Require ip 192.168.42.54
    +        ProxyPass "https://example.com" nocanon
    +
    +    </Proxy>
    +    # Pass all requests on to the internal proxy, except for:
    +    NoProxy "*.internal.org" "192.168.42.0/24"
    +
    +    # Forward HTTP and HTTPS requests
    +    ProxyRemote "*" "http://123.45.67.89:8080"
    +    ProxyRemote "*" "https://123.45.67.89:8080"
    +
    +    # enable HTTP/2, if available
    +    Protocols h2 http/1.1
    +</VirtualHost>
    +
    +# Intermediate configuration
    +SSLProtocol             all -SSLv3 -TLSv1 -TLSv1.1
    +SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:\
    +    ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:\
    +    ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:\
    +    DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    +SSLHonorCipherOrder     off
    +SSLSessionTickets       off
    +

    The Apache webserver was configured to listen on TCP ports 80 and 443 (the default port for HTTPS). The idea was to forward any requests on both of these ports to the actual SaaS provider on the internet, because the server’s webservice was listening on port 8080. To make the server think it’s calling the service on the internet, we had to edit the local hosts file like this:

    127.0.0.1  example.com
    +::1        example.com
    +

    At that point my solution was working quite well. Any traffic the server received on port 80 was forwarded to the proxy server. The next step was to configure the service accordingly to make it work with the proxy solution. But as soon as we started to try out our setup, we got the following message: Invalid certificate for website example.com. What happened here?

    Analysis of the server certificate + +Link to heading

    When I took a look at the actual certificate of the website, I saw that it is using Certificate Transparency:

    [Image: certificate details showing Certificate Transparency]

    The cool thing about Certificate Transparency is that nobody else can issue a certificate for your website without it becoming publicly known. This has been a default policy since March 2018, as certificate authorities are required to support it, as mentioned by Mozilla.

    So in the end we were able to create proxy support for the application, but it didn’t help us because of certificate transparency. :-)

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/page/1/index.html b/posts/page/1/index.html new file mode 100644 index 0000000..d25eb0b --- /dev/null +++ b/posts/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/posts/ + \ No newline at end of file diff --git a/posts/prometheus_instrumenting/index.html b/posts/prometheus_instrumenting/index.html new file mode 100644 index 0000000..7ce2529 --- /dev/null +++ b/posts/prometheus_instrumenting/index.html @@ -0,0 +1,212 @@ +Using Prometheus for application metrics · Engineering Blog +

    Using Prometheus for application metrics

    One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google, for example, uses their own software (called Borgmon) for monitoring. The open source software Prometheus allows you to capture time-series data in order to monitor different statistics of an application, just like Borgmon does. Let’s see how we can do this on our own.

    The basics of Prometheus + +Link to heading

    Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple HTML overview served by a local webserver. Right now you can see some statistics, like the time it took to download and render all feeds, inside the console. While this is nice to know, it’s difficult to monitor.

    Prometheus allows us to add some code to our application that registers another HTTP listener where the metrics are exposed. As soon as we have added this handler, we can add different metric types like counters or gauges.

    To add the web handler I just had to use the following:

    // Adding the Prometheus HTTP handler
    +http.Handle("/metrics", promhttp.Handler())
    +go http.ListenAndServe(":2112", nil)
    +

    Do mind the keyword go here. It ensures that the HTTP handler runs inside a goroutine, which is executed asynchronously. This way all requests to the metrics endpoint are handled by that routine without blocking the rest of the program. If you left out the go keyword, the call to ListenAndServe would block and the rest of the application would never run.

    As soon as you add the http handler some default Golang metrics are exposed like

    • Total http responses (statuscode 200, 500 and 503)
    • number of Go threads
    • seconds since start time

    This is already helpful but doesn’t say too much about our application. Let’s add our own metrics.

    Adding metrics for the application + +Link to heading

    Before just adding any metrics it’s important to ask yourself

    • What are the most important parts of the application to be instrumented (e. g. successfully processed input)?
    • Is the error rate suddenly increasing?
    • Does the (average) response time increase?
    • Anything else important you need to monitor?

    If you can answer these questions and place metrics at these spots, you cover the most important parts of the application.

    Diving into the code + +Link to heading

    For my RSS CLI I split the code into two different files, where one file holds the main function and the other file holds all necessary helper functions. To use the Prometheus metrics globally, I had to declare them in the global variable scope like this:

    var (
    +	// This is not the full list but a snapshot.
    +	// Prometheus variables for metrics
    +	opsProcessed = promauto.NewCounter(prometheus.CounterOpts{
    +		Name: "rss_reader_total_requests",
    +		Help: "The total number of processed events",
    +	})
    +	cacheHits = promauto.NewCounter(prometheus.CounterOpts{
    +		Name: "total_number_of_cache_hits",
    +		Help: "The total number of processed events answered by cache",
    +	})
    +	rssRequests = promauto.NewCounter(prometheus.CounterOpts{
    +		Name: "total_number_of_rss_requests",
    +		Help: "The total number of requests sent to get rss feeds",
    +	})
    +
    +	// See: https://godoc.org/github.com/prometheus/client_golang/prometheus#Summary
    +	responseTime = prometheus.NewSummary(prometheus.SummaryOpts{
    +		Name:       "response_time_summary",
    +		Help:       "The sum of response times.",
    +		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
    +	})
    +)
    +

    You can now access these variables from anywhere in the program, including other files. For example, if a request has been processed successfully, the program increments the counter opsProcessed by 1 with opsProcessed.Inc(). The full source code for the function looks like this:

    // ParseFeeds allows to get feeds from a site.
    +func ParseFeeds(siteURL, proxyURL string, news chan<- *gofeed.Feed) {
    +
    +	// Measure the execution time of this function
    +	defer duration(track("ParseFeeds for site " + siteURL))
    +
    +	// When finished, write it to the channel
    +	defer wg.Done()
    +
    +    // Proxy URL see 
    +    // https://stackoverflow.com/questions/14661511/setting-up-proxy-for-http-client
    +	var client http.Client
    +
    +	// Proxy URL is given
    +	if len(proxyURL) > 0 {
    +		proxyURL, err := url.Parse(proxyURL)
    +		if err != nil {
    +			fmt.Println(err)
    +		}
    +
    +		client = http.Client{Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)}}
    +	} else {
    +		client = http.Client{}
    +	}
    +
    +	item, found := c.Get(siteURL)
    +	if found {
    +		//  Type assertion see: https://golangcode.com/convert-interface-to-number/
    +		news <- item.(*gofeed.Feed)
    +
    +		// Increase the counter for cache hits
    +		cacheHits.Inc()
    +	} else {
    +		// rate limit the feed parsing
    +		<-throttle
    +
    +		rssRequests.Inc()
    +
    +        // Changed this to NewRequest as the golang docs 
    +        // says you need this for custom headers
    +		req, err := http.NewRequest("GET", siteURL, nil)
    +		if err != nil {
    +			log.Fatalln(err)
    +		}
    +
    +		// Set a custom user header because some site block away default crawlers
    +		req.Header.Set("User-Agent", "Golang/RSS_Reader by Warryz")
    +
    +		// Get the Feed of the particular website
    +		resp, err := client.Do(req)
    +
    +		if err != nil {
    +			fmt.Println(err)
    +		} else {
    +			defer resp.Body.Close()
    +			if resp.StatusCode == 200 {
    +				// Read the response and parse it as string
    +				body, _ := ioutil.ReadAll(resp.Body)
    +				fp := gofeed.NewParser()
    +				feed, _ := fp.ParseString(string(body))
    +
    +				// Return the feed with all its items.
    +				if feed != nil {
    +					c.Set(siteURL, feed, cache.DefaultExpiration)
    +					news <- feed
    +				}
    +			}
    +		}
    +	}
    +}
    +

    As you can see, we have now instrumented our application with several metrics that are exposed by our second web handler. Every time I send a request to the webserver, the metrics are adjusted, for example how many requests were answered from the cache or had to be fetched from the internet.
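
    You can check the exposed values directly against the metrics listener from above; a quick sketch (the metric name is the one registered in the code):

    # Query the metrics endpoint started on port 2112 and filter for the request counter
    curl -s http://localhost:2112/metrics | grep rss_reader_total_requests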

    You can find an official example from Prometheus here. I also found a presentation of Google’s monitoring on the internet, maybe this helps you too. Thanks for reading and I hope you enjoyed the article!

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/raspi4_setup/index.html b/posts/raspi4_setup/index.html new file mode 100644 index 0000000..d469332 --- /dev/null +++ b/posts/raspi4_setup/index.html @@ -0,0 +1,188 @@ +Setting up the new Raspberry Pi 4 with Ansible · Engineering Blog +

    Setting up the new Raspberry Pi 4 with Ansible

    Since June 2019 the new Raspberry Pi 4 has been available to buy. It features much more memory (up to 4 GiB), a Gigabit Ethernet port and two USB 3 ports. So there is a lot of power to compute with, but before we can start playing with it, we have to set it up.

    One more thing to say: I don’t want to manage my Pi via the CLI but with Ansible. So any setting or command I need will be implemented using an Ansible playbook.

    Preparing the Raspi + +Link to heading

    As Linux servers are supposed to be used from the commandline, I’m not using a GUI on my Pi but Raspbian Lite. This small image contains only the most basic software to run the Raspi. The last thing we have to do is write the image to an SD card as described here.

    I want to enable SSH by default at startup. To do this I had to create a file called ssh in /boot. By doing this the SSH daemon is automatically started on boot.
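
    Assuming the SD card’s boot partition is mounted at /boot on your workstation (the mount point may differ), this is all it takes:

    # An empty file named "ssh" on the boot partition enables the SSH daemon on first boot
    sudo touch /boot/ssh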

    Configuring some basic settings + +Link to heading

    To be able to configure settings with Ansible, one way to manage my Pi is to add its IP address to my hosts file, which could look like this:

    ---
    +all:
    +  vars:
    +    ansible_ssh_user: pascal
    +    user_ssh_pub_key: "{{ lookup('file','~/ssh_key_raspi') }}"
    +
    +  children:
    +    pis:
    +      # List of Raspberry Pis
    +      hosts:
    +        192.168.200.150:
    +    new_pis:
    +      # Contains only new Pis
    +      hosts:
    +        192.168.200.151:  
    +

    You may notice that there are two different groups: pis and new_pis. This is because the new Raspberry Pi is still untouched and no public SSH key has been deployed yet which would allow seamless remote access.

    When I created my new SSH key pair I looked for technical recommendations from the German Federal Office for Information Security (BSI). In the directive TR-02102-4 they recommend things like

    • use only SSH version 2
    • enable only public key authentication
    • the use of a key algorithm like ecdsa-sha2-* (which, according to current knowledge, you can use until at least 2025); see the key generation sketch below
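
    A minimal sketch of generating such a key pair (file name and comment are just examples):

    # Generate an ECDSA key pair (ecdsa-sha2-nistp521); the public key ends up in ~/ssh_key_raspi.pub
    ssh-keygen -t ecdsa -b 521 -f ~/ssh_key_raspi -C "ansible@raspi"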

    Making the Pi managed by SSH key + +Link to heading

    As mentioned before, I had already generated an SSH key as described here. With this playbook I created the new user pascal for me, copied the SSH public key onto the remote machine and deleted the default user pi for security reasons.

    ---
    +- hosts: new_pis
    +  tasks:
    +    - name: Add the user 'pascal'
    +      user:
    +        name: pascal
    +
    +    - name: Set authorized key taken from file
    +      authorized_key:
    +        user: "pascal"
    +        state: present
    +        key: "{{ lookup('file', '/home/pascal/pub') }}"
    +    - name: Remove the default user 'pi'
    +      user:
    +        name: pi
    +        state: absent
    +      # Connect as the new user, since 'pi' is being removed
    +      vars:
    +        ansible_ssh_user: pascal
    +

    Now I can already move the IP address in the hosts.yml file from new_pis to pis, as the Pi is now accessible with my SSH key.

    Some more configuration + +Link to heading

    There are more settings that I need to configure like

    • Enable passwordless sudo for my account
    • Disable password based authentication (so only public key based authentication is enabled)
    • Disable root login
    ---
    +- hosts: new_pis
    +  # You may need to add --ask-become-pass -b on the command line
    +  become: yes
    +  tasks:
    +    - name: Allow passwordless sudo for my account
    +      lineinfile:
    +        path: /etc/sudoers
    +        state: present
    +        line: "pascal ALL=(ALL) NOPASSWD: ALL"
    +        validate: "visudo -cf %s"
    +
    +    - name: Disallow password authentication
    +      lineinfile:
    +        path: /etc/ssh/sshd_config
    +        regexp: "^PasswordAuthentication"
    +        line: "PasswordAuthentication no"
    +        state: present
    +      notify:
    +        - Restart ssh
    +
    +    - name: Disable root login
    +      lineinfile:
    +        path: /etc/ssh/sshd_config
    +        regexp: "^PermitRootLogin"
    +        line: "PermitRootLogin no"
    +        state: present
    +      notify:
    +        - Restart ssh
    +
    +  # Handler that restarts the SSH daemon once the config changes are done
    +  handlers:
    +    - name: Restart ssh
    +      service:
    +        name: sshd
    +        state: restarted
    +

    While running this I of course had to add my sudo password with the commandline parameter -K to supply the password to become root.

    When all the settings are implemented the SSH daemon is restarted by a handler.

    Getting the latest updates + +Link to heading

    One last step before the Pi is ready to serve its duty is installing the latest updates. You can of course do this by running apt update; apt upgrade -y on the commandline, but as I mentioned earlier I don’t want to run commands by hand. So I created another playbook for this purpose:

    ---
    +- hosts: pis
    +  become: yes
    +  tasks:
    +    - name: Ping the Raspi
    +      ping:
    +
    +    # See: https://docs.ansible.com/ansible/latest/modules/apt_module.html#examples
    +    - name: Run apt update
    +      apt:
    +        update_cache: yes
    +      become: true
    +
    +    - name: Run apt upgrade
    +      apt:
    +        name: "*"
    +        state: latest
    +      become: true
    +

    You may have noticed that I’m using the apt module here because I’m running Raspbian. This allows me to run apt update and apt upgrade.

    There is also the generic package module which allows you to write playbooks that work with any package manager. Unfortunately it doesn’t allow you to simply install the latest updates, so I’m using the apt module here.

    Conclusion + +Link to heading

    The new Raspberry Pi 4 is now prepared and ready for computing. It can be completely managed by Ansible to do things like installing updates or software, managing users or even installing applications (which of course will happen later).

    I hope you enjoyed reading this article and have a nice day!

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/posts/ytt/index.html b/posts/ytt/index.html new file mode 100644 index 0000000..ffafdf4 --- /dev/null +++ b/posts/ytt/index.html @@ -0,0 +1,415 @@ +Kubernetes templating with Carvel ytt · Engineering Blog +

    Kubernetes templating with Carvel ytt

    Introduction + +Link to heading

    Hello again, this is another blog post about a great CNCF tool. If you’ve ever worked with Kubernetes manifests, you probably know that editing or creating them by hand can be very painful.

    On the other hand, as a developer or engineer you don’t want to edit a lot in these manifests. It is usually better to edit only the necessary parts and leave the rest as it was before.

    But how do you manage deployments at a bigger scale? Imagine many teams with different services and requirements: every developer would need detailed knowledge of the manifest files.

    The solution + +Link to heading

    One way to solve this is abstraction. You just enable your developers to fill out only the necessary fields and the rest is automatically generated.

    According to the GitOps principles, the desired state of systems should be

    • Declarative
    • Versioned and immutable
    • Pulled automatically and
    • Continuously Reconciled.

    This can be achieved by using templates. One really great CNCF tool for this is Carvel ytt. ytt is a commandline tool which allows you to render the templates locally, but it also works in a CI/CD pipeline. Even better, it comes with a local playground which allows you to play around and test before you break anything inside the templating.

    Preparing the required data + +Link to heading

    I’m starting with a simple example: Imagine you want to deploy a Prometheus exporter inside of Kubernetes. ytt uses Starlark, a Python-based language, for templating. With this language, you can create powerful templating mechanisms.

    You start by creating a simple values file, with all necessary but basic settings, which looks like this:

    #@data/values
    +---
    +app_name: example-exporter
    +prioritiy_class: low
    +metrics:
    +  scrape: true
    +  port: 9100
    +  path: /metrics
    +labels:
    +  team: devops
    +resources:
    +  mem_limit: 32Mi
    +  mem_requests: 16Mi
    +  cpu_limit: 0.01
    +stages:
    +  - name: dev
    +    namespace: dev
    +    variables:
    +      database: dev.example.com
    +    replicas: 1
    +    version: 0.2
    +  - name: qa
    +    namespace: qa
    +    variables:
    +      database: qa.example.com
    +    replicas: 1
    +    version: 0.2
    +  - name: prod
    +    namespace: prod
    +    variables:
    +      database: prod.example.com
    +    replicas: 1
    +    version: 0.2
    +

    This file contains all the necessary data to finally create all Kubernetes objects, like

    • different deployments per stage (dev, qa and prod) and the namespace
    • a service per stage
    • labels
    • number of replicas
    • resource limits
    • a priority class and
    • annotations for metrics scraping with Prometheus

    Creating the Kubernetes manifest templates + +Link to heading

    The next step is to create the actual Kubernetes manifests for templating. We start with the service object which looks like this:

    #@ load("@ytt:data", "data")
    +
    +#@ for item in data.values.stages:
    +---
    +apiVersion: v1
    +kind: Service
    +metadata:
    +  labels:
    +    app: #@ data.values.app_name
    +    team: #@ data.values.labels.team
    +  name: #@ data.values.app_name
    +  namespace: #@ item.namespace
    +spec:
    +  ports:
    +    - name: http
    +      port: 9100
    +      protocol: TCP
    +      targetPort: 9100
    +  selector:
    +    app: #@ data.values.app_name
    +#@ end
    +

    You can save this file into a directory called deployment. The next step is to create the actual deployment manifest template:

    #@ load("@ytt:data", "data")
    +
    +#@ for item in data.values.stages:
    +---
    +apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  labels:
    +    app: #@ data.values.app_name
    +    team: #@ data.values.labels.team
    +  name: #@ data.values.app_name
    +  namespace: #@ item.namespace
    +spec:
    +  replicas: #@ item.replicas
    +  selector:
    +    matchLabels:
    +      app: #@ data.values.app_name
    +  strategy:
    +    rollingUpdate:
    +      maxSurge: 1
    +      maxUnavailable: 0
    +    type: RollingUpdate
    +  template:
    +    metadata:
    +      labels:
    +        app: #@ data.values.app_name
    +      annotations:
    +        "prometheus.io/scrape": "true"
    +        "prometheus.io/port": #@ data.values.metrics.port
    +    spec:
    +      containers:
    +        image: #@ "ghcr.io/example-org/" +  data.values.app_name + ":" + str(item.version)
    +        env:
    +        - name: DATABASE
    +            value: #@ item.variables.database
    +        imagePullPolicy: Always
    +        livenessProbe:
    +        failureThreshold: 3
    +        httpGet:
    +            path: #@ data.values.metrics.path
    +            port: #@ data.values.metrics.port
    +        periodSeconds: 10
    +        name: #@ data.values.app_name
    +        ports:
    +        - containerPort: #@ data.values.metrics.port
    +            name: http
    +        readinessProbe:
    +        httpGet:
    +            path: #@ data.values.metrics.path
    +            port: #@ data.values.metrics.port
    +        periodSeconds: 5
    +        resources:
    +        limits:
    +            memory: #@ data.values.resources.mem_limit
    +            cpu: #@ data.values.resources.cpu_limit
    +        requests:
    +            memory: #@ data.values.resources.mem_requests
    +      priorityClassName: #@ data.values.prioritiy_class
    +      restartPolicy: Always
    +#@ end
    +

Using variables with ytt

As you saw, the code contains a lot of what looks like YAML comments. These are, of course, not comments but ytt annotations that inject variables. If you look at the metadata part, you can see that every value is templated:

metadata:
  labels:
    app: #@ data.values.app_name
    team: #@ data.values.labels.team
  name: #@ data.values.app_name
  namespace: #@ item.namespace
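These annotations are not limited to simple value lookups either. ytt templates are backed by Starlark, so you can also define small helper functions. As a minimal sketch (not part of the original templates), the image expression from the Deployment could be pulled into a reusable function like this:

#@ load("@ytt:data", "data")

#! Helper to build the image reference for a stage (illustrative only).
#@ def image(item):
#@   return "ghcr.io/example-org/" + data.values.app_name + ":" + str(item.version)
#@ end

Inside the for loop, the container line would then simply read image: #@ image(item).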

Even better: we're creating a manifest per stage! Whenever we add a new stage to the values file, the corresponding manifests are rendered automatically for us, as the sketch below shows.
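For example, appending a hypothetical staging stage to the stages list in values.yaml would be enough to get another Deployment and Service on the next render (the hostname here is made up):

#! Hypothetical additional stage; staging.example.com is an example hostname.
  - name: staging
    namespace: staging
    variables:
      database: staging.example.com
    replicas: 1
    version: 0.2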

Putting it all together

Once both of the files above are prepared, we can render the final manifest file. This is as simple as running ytt -f deployment -f values.yaml > deployment.autogen.yaml. Since ytt prints the rendered documents to stdout, you could just as well pipe them straight into kubectl apply -f - instead of redirecting them to a file.

The generated manifests for the prod stage look like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example-exporter
    team: devops
  name: example-exporter
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-exporter
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: example-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: 9100
    spec:
      containers:
      - name: example-exporter
        image: ghcr.io/example-org/example-exporter:0.2
        imagePullPolicy: Always
        env:
        - name: DATABASE
          value: prod.example.com
        ports:
        - containerPort: 9100
          name: http
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /metrics
            port: 9100
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /metrics
            port: 9100
          periodSeconds: 5
        resources:
          limits:
            memory: 32Mi
            cpu: 0.01
          requests:
            memory: 16Mi
      priorityClassName: low
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: example-exporter
    team: devops
  name: example-exporter
  namespace: prod
spec:
  ports:
  - name: http
    port: 9100
    protocol: TCP
    targetPort: 9100
  selector:
    app: example-exporter

And all of this comes from just a single command and a little templating. Every time we change something in the values file, we can re-render the resulting manifests, or even better, let a CI/CD pipeline do it for us.
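As a rough sketch of what that could look like with GitHub Actions (the workflow below is illustrative; it assumes the official Carvel install script at carvel.dev/install.sh is used to provide the ytt binary on the runner):

# Hypothetical workflow, e.g. .github/workflows/render.yaml
name: Render manifests
on:
  push:
    paths:
      - values.yaml
      - "deployment/**"
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumption: the Carvel install script places ytt on the PATH.
      - name: Install ytt
        run: curl -sL https://carvel.dev/install.sh | sudo bash
      - name: Render manifests
        run: ytt -f deployment -f values.yaml > deployment.autogen.yaml
      # What happens with the rendered file (commit, artifact, kubectl apply)
      # depends entirely on your deployment setup.
      - name: Upload rendered manifests
        uses: actions/upload-artifact@v4
        with:
          name: rendered-manifests
          path: deployment.autogen.yaml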

If you pair this with a Taskfile, you can watch for changes and re-render the manifests automatically; a sketch follows below.
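A minimal Taskfile for this could look roughly like the following (assuming go-task v3; the file layout and task name are illustrative):

# Taskfile.yml -- illustrative sketch
version: '3'

tasks:
  render:
    desc: Render the Kubernetes manifests with ytt
    sources:
      - values.yaml
      - deployment/*.yaml
    generates:
      - deployment.autogen.yaml
    cmds:
      - ytt -f deployment -f values.yaml > deployment.autogen.yaml

Running task --watch render then re-renders deployment.autogen.yaml whenever the values file or one of the templates changes.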

Conclusion

ytt is a great abstraction tool that enables DevOps engineers and developers to automate a lot of their Kubernetes work. Paired with CI/CD, it can speed up your deployments while lowering the barrier to entry for new developers.

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 0000000..cd8710c --- /dev/null +++ b/sitemap.xml @@ -0,0 +1 @@ +https://pgrunm.github.io/posts/kured/2023-07-15T10:40:30+02:00https://pgrunm.github.io/tags/cncf/2023-07-15T10:40:30+02:00https://pgrunm.github.io/2023-07-15T10:40:30+02:00https://pgrunm.github.io/tags/flatcar/2023-07-15T10:40:30+02:00https://pgrunm.github.io/tags/kops/2023-07-15T10:40:30+02:00https://pgrunm.github.io/tags/kubernetes/2023-07-15T10:40:30+02:00https://pgrunm.github.io/posts/2023-07-15T10:40:30+02:00https://pgrunm.github.io/tags/security/2023-07-15T10:40:30+02:00https://pgrunm.github.io/tags/2023-07-15T10:40:30+02:00https://pgrunm.github.io/posts/ytt/2023-06-25T13:35:33+02:00https://pgrunm.github.io/tags/yaml/2023-06-25T13:35:33+02:00https://pgrunm.github.io/tags/aws/2021-06-21T23:00:00+01:00https://pgrunm.github.io/posts/infrastructure_flutter_part2/2021-06-21T23:00:00+01:00https://pgrunm.github.io/tags/devops/2021-06-21T23:00:00+01:00https://pgrunm.github.io/tags/flutter/2021-06-21T23:00:00+01:00https://pgrunm.github.io/tags/jenkins/2021-06-21T23:00:00+01:00https://pgrunm.github.io/tags/terraform/2021-06-21T23:00:00+01:00https://pgrunm.github.io/posts/infrastructure_flutter_part1/2021-04-21T21:12:43+01:00https://pgrunm.github.io/tags/apache/2020-07-07T18:45:07+01:00https://pgrunm.github.io/posts/nginx_forward_proxy/2020-07-07T18:45:07+01:00https://pgrunm.github.io/tags/forward-proxy/2020-07-07T18:45:07+01:00https://pgrunm.github.io/tags/nginx/2020-07-07T18:45:07+01:00https://pgrunm.github.io/tags/windows/2020-07-07T18:45:07+01:00https://pgrunm.github.io/tags/gpt/2020-04-19T21:45:07+01:00https://pgrunm.github.io/posts/mbr_and_gpt/2020-04-19T21:45:07+01:00https://pgrunm.github.io/tags/hard-drive/2020-04-19T21:45:07+01:00https://pgrunm.github.io/tags/linux/2020-04-19T21:45:07+01:00https://pgrunm.github.io/tags/mbr/2020-04-19T21:45:07+01:00https://pgrunm.github.io/tags/ansible/2020-03-28T18:45:07+01:00https://pgrunm.github.io/tags/raspberry-pi/2020-03-28T18:45:07+01:00https://pgrunm.github.io/posts/raspi4_setup/2020-03-28T18:45:07+01:00https://pgrunm.github.io/tags/go/2020-03-16T19:33:10+01:00https://pgrunm.github.io/tags/metrics/2020-03-16T19:33:10+01:00https://pgrunm.github.io/tags/monitoring/2020-03-16T19:33:10+01:00https://pgrunm.github.io/tags/prometheus/2020-03-16T19:33:10+01:00https://pgrunm.github.io/posts/prometheus_instrumenting/2020-03-16T19:33:10+01:00https://pgrunm.github.io/tags/api/2020-02-26T19:33:10+01:00https://pgrunm.github.io/tags/scalability/2020-02-26T19:33:10+01:00https://pgrunm.github.io/posts/aws_scaling_comparison/2020-02-26T19:33:10+01:00https://pgrunm.github.io/tags/bash/2020-02-13T14:31:31+01:00https://pgrunm.github.io/posts/building_blog_part2/2020-02-13T14:31:31+01:00https://pgrunm.github.io/tags/continuous-deployment/2020-02-13T14:31:31+01:00https://pgrunm.github.io/tags/git/2020-02-13T14:31:31+01:00https://pgrunm.github.io/tags/submodule/2020-02-13T14:31:31+01:00https://pgrunm.github.io/posts/building_blog_part1/2020-02-01T14:31:31+01:00https://pgrunm.github.io/contact/2020-02-01T14:31:31+01:00https://pgrunm.github.io/tags/first-post/2020-02-01T14:31:31+01:00https://pgrunm.github.io/tags/hugo/2020-02-01T14:31:31+01:00https://pgrunm.github.io/about/2020-02-01T14:30:43+01:00https://pgrunm.github.io/categories/ \ No newline at end of file diff --git a/tags/ansible/index.html b/tags/ansible/index.html new file mode 100644 index 0000000..6d268d9 --- /dev/null +++ b/tags/ansible/index.html @@ 
-0,0 +1,5 @@ +Tag: Ansible · Engineering Blog +

    Tag: Ansible

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/ansible/index.xml b/tags/ansible/index.xml new file mode 100644 index 0000000..27c28be --- /dev/null +++ b/tags/ansible/index.xml @@ -0,0 +1,2 @@ +Ansible on Engineering Bloghttps://pgrunm.github.io/tags/ansible/Recent content in Ansible on Engineering BlogHugoen-usSat, 28 Mar 2020 18:45:07 +0100Setting up the new Raspberry Pi 4 with Ansiblehttps://pgrunm.github.io/posts/raspi4_setup/Sat, 28 Mar 2020 18:45:07 +0100https://pgrunm.github.io/posts/raspi4_setup/Since June 2019 the new Raspberry Pi 4 is available to buy. It features much more memory (up to 4 GiB), a Gigabit Ethernet port and two USB 3 ports. So there is a lot of power to compute with, but before we can start playing with it, we have to set it up. +One more thing to say: I don&rsquo;t want to manage my Pi by CLI but with Ansible. \ No newline at end of file diff --git a/tags/ansible/page/1/index.html b/tags/ansible/page/1/index.html new file mode 100644 index 0000000..00fcdc1 --- /dev/null +++ b/tags/ansible/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/ansible/ + \ No newline at end of file diff --git a/tags/apache/index.html b/tags/apache/index.html new file mode 100644 index 0000000..5cd34a2 --- /dev/null +++ b/tags/apache/index.html @@ -0,0 +1,5 @@ +Tag: Apache · Engineering Blog +

    Tag: Apache

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/apache/index.xml b/tags/apache/index.xml new file mode 100644 index 0000000..31c8827 --- /dev/null +++ b/tags/apache/index.xml @@ -0,0 +1 @@ +Apache on Engineering Bloghttps://pgrunm.github.io/tags/apache/Recent content in Apache on Engineering BlogHugoen-usTue, 07 Jul 2020 18:45:07 +0100Establishing proxy support for an application without proxy supporthttps://pgrunm.github.io/posts/nginx_forward_proxy/Tue, 07 Jul 2020 18:45:07 +0100https://pgrunm.github.io/posts/nginx_forward_proxy/Introduction Link to heading Hello again dear reader :-)! Some time passed since my last blog post, because I have been busy with University, but now since exams are done, I have some more time for creating the latest post. Recently I stumbled upon an application that needed internet access but unfortunately didn&rsquo;t support a proxy server yet. At that point of the project we had to find a way to allow this application to communicate directly with the internet, but without having a direct connection to the internet. \ No newline at end of file diff --git a/tags/apache/page/1/index.html b/tags/apache/page/1/index.html new file mode 100644 index 0000000..97acc72 --- /dev/null +++ b/tags/apache/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/apache/ + \ No newline at end of file diff --git a/tags/api/index.html b/tags/api/index.html new file mode 100644 index 0000000..a477736 --- /dev/null +++ b/tags/api/index.html @@ -0,0 +1,5 @@ +Tag: API · Engineering Blog +

    Tag: API

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/api/index.xml b/tags/api/index.xml new file mode 100644 index 0000000..669eb8f --- /dev/null +++ b/tags/api/index.xml @@ -0,0 +1,2 @@ +API on Engineering Bloghttps://pgrunm.github.io/tags/api/Recent content in API on Engineering BlogHugoen-usWed, 26 Feb 2020 19:33:10 +0100Scaling expriments with different AWS serviceshttps://pgrunm.github.io/posts/aws_scaling_comparison/Wed, 26 Feb 2020 19:33:10 +0100https://pgrunm.github.io/posts/aws_scaling_comparison/As part of my studies I had to write an assigment in the module electronic business. I decided to develop some kind of dummy REST api application where I could try different architectures. The reason for me to try this out was to see how the performance changes over time if you increase the load. +I decided to use Go for this project, because it was designed for scalable cloud architectures and if you compile your code you just get a single binary file which you just have to upload to your machine and execute. \ No newline at end of file diff --git a/tags/api/page/1/index.html b/tags/api/page/1/index.html new file mode 100644 index 0000000..aabf631 --- /dev/null +++ b/tags/api/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/api/ + \ No newline at end of file diff --git a/tags/aws/index.html b/tags/aws/index.html new file mode 100644 index 0000000..1177837 --- /dev/null +++ b/tags/aws/index.html @@ -0,0 +1,7 @@ +Tag: AWS · Engineering Blog +

    Tag: AWS

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/aws/index.xml b/tags/aws/index.xml new file mode 100644 index 0000000..f05d85d --- /dev/null +++ b/tags/aws/index.xml @@ -0,0 +1,4 @@ +AWS on Engineering Bloghttps://pgrunm.github.io/tags/aws/Recent content in AWS on Engineering BlogHugoen-usMon, 21 Jun 2021 23:00:00 +0100Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the the required Jenkins containers (master/agent) are set up. Let&rsquo;s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It&rsquo;s been a while, because I finally finished my studied and I&rsquo;m now Bachelor of Science :-). Anyway, I wanted to create a blog post of my bachelor thesis and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastrcture was completely created with Terraform in AWS.Scaling expriments with different AWS serviceshttps://pgrunm.github.io/posts/aws_scaling_comparison/Wed, 26 Feb 2020 19:33:10 +0100https://pgrunm.github.io/posts/aws_scaling_comparison/As part of my studies I had to write an assigment in the module electronic business. I decided to develop some kind of dummy REST api application where I could try different architectures. The reason for me to try this out was to see how the performance changes over time if you increase the load. +I decided to use Go for this project, because it was designed for scalable cloud architectures and if you compile your code you just get a single binary file which you just have to upload to your machine and execute. \ No newline at end of file diff --git a/tags/aws/page/1/index.html b/tags/aws/page/1/index.html new file mode 100644 index 0000000..3396073 --- /dev/null +++ b/tags/aws/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/aws/ + \ No newline at end of file diff --git a/tags/bash/index.html b/tags/bash/index.html new file mode 100644 index 0000000..a552e5e --- /dev/null +++ b/tags/bash/index.html @@ -0,0 +1,5 @@ +Tag: Bash · Engineering Blog +

    Tag: Bash

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/bash/index.xml b/tags/bash/index.xml new file mode 100644 index 0000000..198f269 --- /dev/null +++ b/tags/bash/index.xml @@ -0,0 +1,2 @@ +Bash on Engineering Bloghttps://pgrunm.github.io/tags/bash/Recent content in Bash on Engineering BlogHugoen-usThu, 13 Feb 2020 14:31:31 +0100Building my new blog: Part 2https://pgrunm.github.io/posts/building_blog_part2/Thu, 13 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part2/In the last post I wrote about my considerations about what software to use for my blog, where to host it and how to set it up. This post contains some more techinical details like the git structure and the deployment process. So then let&rsquo;s dive in. +The git structure Link to heading The hugo projects mentions in their documentation to use a git submodule for the theme. Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo. \ No newline at end of file diff --git a/tags/bash/page/1/index.html b/tags/bash/page/1/index.html new file mode 100644 index 0000000..39eab8b --- /dev/null +++ b/tags/bash/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/bash/ + \ No newline at end of file diff --git a/tags/cncf/index.html b/tags/cncf/index.html new file mode 100644 index 0000000..a74c959 --- /dev/null +++ b/tags/cncf/index.html @@ -0,0 +1,6 @@ +Tag: Cncf · Engineering Blog +

    Tag: Cncf

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/cncf/index.xml b/tags/cncf/index.xml new file mode 100644 index 0000000..b7fc4f5 --- /dev/null +++ b/tags/cncf/index.xml @@ -0,0 +1,4 @@ +Cncf on Engineering Bloghttps://pgrunm.github.io/tags/cncf/Recent content in Cncf on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it&rsquo;s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have up to thousands of virtual servers, where no engineer can manually update these servers. So what to do, if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem.Kubernetes templating with Carvel ytthttps://pgrunm.github.io/posts/ytt/Sun, 25 Jun 2023 13:35:33 +0200https://pgrunm.github.io/posts/ytt/Introduction Link to heading Hello again, this is another blog post about a great CNCF tool. If you&rsquo;ve ever worked with Kubernetes manifests, you probably know that editing or creating them by hand can be very painful. +On the other side, you as a developer or engineer don&rsquo;t want to edit a lot in these manifests. It is usually better to edit the necessary parts and leave the rest as it was before. \ No newline at end of file diff --git a/tags/cncf/page/1/index.html b/tags/cncf/page/1/index.html new file mode 100644 index 0000000..4aa649b --- /dev/null +++ b/tags/cncf/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/cncf/ + \ No newline at end of file diff --git a/tags/continuous-deployment/index.html b/tags/continuous-deployment/index.html new file mode 100644 index 0000000..3aa0571 --- /dev/null +++ b/tags/continuous-deployment/index.html @@ -0,0 +1,5 @@ +Tag: Continuous Deployment · Engineering Blog +

    Tag: Continuous Deployment

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/continuous-deployment/index.xml b/tags/continuous-deployment/index.xml new file mode 100644 index 0000000..bebc6b8 --- /dev/null +++ b/tags/continuous-deployment/index.xml @@ -0,0 +1,2 @@ +Continuous Deployment on Engineering Bloghttps://pgrunm.github.io/tags/continuous-deployment/Recent content in Continuous Deployment on Engineering BlogHugoen-usThu, 13 Feb 2020 14:31:31 +0100Building my new blog: Part 2https://pgrunm.github.io/posts/building_blog_part2/Thu, 13 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part2/In the last post I wrote about my considerations about what software to use for my blog, where to host it and how to set it up. This post contains some more techinical details like the git structure and the deployment process. So then let&rsquo;s dive in. +The git structure Link to heading The hugo projects mentions in their documentation to use a git submodule for the theme. Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo. \ No newline at end of file diff --git a/tags/continuous-deployment/page/1/index.html b/tags/continuous-deployment/page/1/index.html new file mode 100644 index 0000000..802765d --- /dev/null +++ b/tags/continuous-deployment/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/continuous-deployment/ + \ No newline at end of file diff --git a/tags/devops/index.html b/tags/devops/index.html new file mode 100644 index 0000000..992a287 --- /dev/null +++ b/tags/devops/index.html @@ -0,0 +1,6 @@ +Tag: DevOps · Engineering Blog +

    Tag: DevOps

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/devops/index.xml b/tags/devops/index.xml new file mode 100644 index 0000000..b0284ae --- /dev/null +++ b/tags/devops/index.xml @@ -0,0 +1,3 @@ +DevOps on Engineering Bloghttps://pgrunm.github.io/tags/devops/Recent content in DevOps on Engineering BlogHugoen-usMon, 21 Jun 2021 23:00:00 +0100Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the the required Jenkins containers (master/agent) are set up. Let&rsquo;s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It&rsquo;s been a while, because I finally finished my studied and I&rsquo;m now Bachelor of Science :-). Anyway, I wanted to create a blog post of my bachelor thesis and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastrcture was completely created with Terraform in AWS. \ No newline at end of file diff --git a/tags/devops/page/1/index.html b/tags/devops/page/1/index.html new file mode 100644 index 0000000..56a1e2d --- /dev/null +++ b/tags/devops/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/devops/ + \ No newline at end of file diff --git a/tags/first-post/index.html b/tags/first-post/index.html new file mode 100644 index 0000000..f7b8091 --- /dev/null +++ b/tags/first-post/index.html @@ -0,0 +1,5 @@ +Tag: First Post · Engineering Blog +

    Tag: First Post

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/first-post/index.xml b/tags/first-post/index.xml new file mode 100644 index 0000000..c7ffb9a --- /dev/null +++ b/tags/first-post/index.xml @@ -0,0 +1,2 @@ +First Post on Engineering Bloghttps://pgrunm.github.io/tags/first-post/Recent content in First Post on Engineering BlogHugoen-usSat, 01 Feb 2020 14:31:31 +0100Building my new blog: Part 1https://pgrunm.github.io/posts/building_blog_part1/Sat, 01 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part1/I wanted to create a blog for a long time already, but because of university I had not much spare time. Finally I found some time to create my blog and this post will contain some background information about the software I&rsquo;m using, where it&rsquo;s hosted etc. Enjoy my first post! +What to use Link to heading The first question I asked myself was: What software I&rsquo;m going to use for my blog? \ No newline at end of file diff --git a/tags/first-post/page/1/index.html b/tags/first-post/page/1/index.html new file mode 100644 index 0000000..a03d04f --- /dev/null +++ b/tags/first-post/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/first-post/ + \ No newline at end of file diff --git a/tags/flatcar/index.html b/tags/flatcar/index.html new file mode 100644 index 0000000..00fb0fc --- /dev/null +++ b/tags/flatcar/index.html @@ -0,0 +1,5 @@ +Tag: Flatcar · Engineering Blog +

    Tag: Flatcar

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/flatcar/index.xml b/tags/flatcar/index.xml new file mode 100644 index 0000000..b4a9c46 --- /dev/null +++ b/tags/flatcar/index.xml @@ -0,0 +1,3 @@ +Flatcar on Engineering Bloghttps://pgrunm.github.io/tags/flatcar/Recent content in Flatcar on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it&rsquo;s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have up to thousands of virtual servers, where no engineer can manually update these servers. So what to do, if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem. \ No newline at end of file diff --git a/tags/flatcar/page/1/index.html b/tags/flatcar/page/1/index.html new file mode 100644 index 0000000..61ec3be --- /dev/null +++ b/tags/flatcar/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/flatcar/ + \ No newline at end of file diff --git a/tags/flutter/index.html b/tags/flutter/index.html new file mode 100644 index 0000000..63f7454 --- /dev/null +++ b/tags/flutter/index.html @@ -0,0 +1,6 @@ +Tag: Flutter · Engineering Blog +

    Tag: Flutter

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/flutter/index.xml b/tags/flutter/index.xml new file mode 100644 index 0000000..5f1db6c --- /dev/null +++ b/tags/flutter/index.xml @@ -0,0 +1,3 @@ +Flutter on Engineering Bloghttps://pgrunm.github.io/tags/flutter/Recent content in Flutter on Engineering BlogHugoen-usMon, 21 Jun 2021 23:00:00 +0100Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the the required Jenkins containers (master/agent) are set up. Let&rsquo;s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It&rsquo;s been a while, because I finally finished my studied and I&rsquo;m now Bachelor of Science :-). Anyway, I wanted to create a blog post of my bachelor thesis and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastrcture was completely created with Terraform in AWS. \ No newline at end of file diff --git a/tags/flutter/page/1/index.html b/tags/flutter/page/1/index.html new file mode 100644 index 0000000..1172808 --- /dev/null +++ b/tags/flutter/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/flutter/ + \ No newline at end of file diff --git a/tags/forward-proxy/index.html b/tags/forward-proxy/index.html new file mode 100644 index 0000000..081d45c --- /dev/null +++ b/tags/forward-proxy/index.html @@ -0,0 +1,5 @@ +Tag: Forward Proxy · Engineering Blog +

    Tag: Forward Proxy

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/forward-proxy/index.xml b/tags/forward-proxy/index.xml new file mode 100644 index 0000000..5c0604b --- /dev/null +++ b/tags/forward-proxy/index.xml @@ -0,0 +1 @@ +Forward Proxy on Engineering Bloghttps://pgrunm.github.io/tags/forward-proxy/Recent content in Forward Proxy on Engineering BlogHugoen-usTue, 07 Jul 2020 18:45:07 +0100Establishing proxy support for an application without proxy supporthttps://pgrunm.github.io/posts/nginx_forward_proxy/Tue, 07 Jul 2020 18:45:07 +0100https://pgrunm.github.io/posts/nginx_forward_proxy/Introduction Link to heading Hello again dear reader :-)! Some time passed since my last blog post, because I have been busy with University, but now since exams are done, I have some more time for creating the latest post. Recently I stumbled upon an application that needed internet access but unfortunately didn&rsquo;t support a proxy server yet. At that point of the project we had to find a way to allow this application to communicate directly with the internet, but without having a direct connection to the internet. \ No newline at end of file diff --git a/tags/forward-proxy/page/1/index.html b/tags/forward-proxy/page/1/index.html new file mode 100644 index 0000000..3c48a61 --- /dev/null +++ b/tags/forward-proxy/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/forward-proxy/ + \ No newline at end of file diff --git a/tags/git/index.html b/tags/git/index.html new file mode 100644 index 0000000..6f4a19c --- /dev/null +++ b/tags/git/index.html @@ -0,0 +1,5 @@ +Tag: Git · Engineering Blog +

    Tag: Git

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/git/index.xml b/tags/git/index.xml new file mode 100644 index 0000000..f3b7ab1 --- /dev/null +++ b/tags/git/index.xml @@ -0,0 +1,2 @@ +Git on Engineering Bloghttps://pgrunm.github.io/tags/git/Recent content in Git on Engineering BlogHugoen-usThu, 13 Feb 2020 14:31:31 +0100Building my new blog: Part 2https://pgrunm.github.io/posts/building_blog_part2/Thu, 13 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part2/In the last post I wrote about my considerations about what software to use for my blog, where to host it and how to set it up. This post contains some more techinical details like the git structure and the deployment process. So then let&rsquo;s dive in. +The git structure Link to heading The hugo projects mentions in their documentation to use a git submodule for the theme. Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo. \ No newline at end of file diff --git a/tags/git/page/1/index.html b/tags/git/page/1/index.html new file mode 100644 index 0000000..4766930 --- /dev/null +++ b/tags/git/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/git/ + \ No newline at end of file diff --git a/tags/go/index.html b/tags/go/index.html new file mode 100644 index 0000000..f60556a --- /dev/null +++ b/tags/go/index.html @@ -0,0 +1,6 @@ +Tag: Go · Engineering Blog +

    Tag: Go

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/go/index.xml b/tags/go/index.xml new file mode 100644 index 0000000..19c189e --- /dev/null +++ b/tags/go/index.xml @@ -0,0 +1,3 @@ +Go on Engineering Bloghttps://pgrunm.github.io/tags/go/Recent content in Go on Engineering BlogHugoen-usMon, 16 Mar 2020 19:33:10 +0100Using Prometheus for application metricshttps://pgrunm.github.io/posts/prometheus_instrumenting/Mon, 16 Mar 2020 19:33:10 +0100https://pgrunm.github.io/posts/prometheus_instrumenting/One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google for example uses their own software (called Borgmon) for monitoring. The open source software Prometheus allows to capture time-series data in order to monitor different statistics of an application like Borgmon does. Let&rsquo;s see how we can do this on our own. +The basics of Prometheus Link to heading Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple html overview with a local webserver.Scaling expriments with different AWS serviceshttps://pgrunm.github.io/posts/aws_scaling_comparison/Wed, 26 Feb 2020 19:33:10 +0100https://pgrunm.github.io/posts/aws_scaling_comparison/As part of my studies I had to write an assigment in the module electronic business. I decided to develop some kind of dummy REST api application where I could try different architectures. The reason for me to try this out was to see how the performance changes over time if you increase the load. +I decided to use Go for this project, because it was designed for scalable cloud architectures and if you compile your code you just get a single binary file which you just have to upload to your machine and execute. \ No newline at end of file diff --git a/tags/go/page/1/index.html b/tags/go/page/1/index.html new file mode 100644 index 0000000..0119abd --- /dev/null +++ b/tags/go/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/go/ + \ No newline at end of file diff --git a/tags/gpt/index.html b/tags/gpt/index.html new file mode 100644 index 0000000..764e4d1 --- /dev/null +++ b/tags/gpt/index.html @@ -0,0 +1,5 @@ +Tag: GPT · Engineering Blog +

    Tag: GPT

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/gpt/index.xml b/tags/gpt/index.xml new file mode 100644 index 0000000..93f71a3 --- /dev/null +++ b/tags/gpt/index.xml @@ -0,0 +1,2 @@ +GPT on Engineering Bloghttps://pgrunm.github.io/tags/gpt/Recent content in GPT on Engineering BlogHugoen-usSun, 19 Apr 2020 21:45:07 +0100GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neigher read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn&rsquo;t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive. \ No newline at end of file diff --git a/tags/gpt/page/1/index.html b/tags/gpt/page/1/index.html new file mode 100644 index 0000000..64e2203 --- /dev/null +++ b/tags/gpt/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/gpt/ + \ No newline at end of file diff --git a/tags/hard-drive/index.html b/tags/hard-drive/index.html new file mode 100644 index 0000000..5ec3c97 --- /dev/null +++ b/tags/hard-drive/index.html @@ -0,0 +1,5 @@ +Tag: Hard Drive · Engineering Blog +

    Tag: Hard Drive

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/hard-drive/index.xml b/tags/hard-drive/index.xml new file mode 100644 index 0000000..b42e3de --- /dev/null +++ b/tags/hard-drive/index.xml @@ -0,0 +1,2 @@ +Hard Drive on Engineering Bloghttps://pgrunm.github.io/tags/hard-drive/Recent content in Hard Drive on Engineering BlogHugoen-usSun, 19 Apr 2020 21:45:07 +0100GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neigher read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn&rsquo;t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive. \ No newline at end of file diff --git a/tags/hard-drive/page/1/index.html b/tags/hard-drive/page/1/index.html new file mode 100644 index 0000000..3c86610 --- /dev/null +++ b/tags/hard-drive/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/hard-drive/ + \ No newline at end of file diff --git a/tags/hugo/index.html b/tags/hugo/index.html new file mode 100644 index 0000000..47f5e7c --- /dev/null +++ b/tags/hugo/index.html @@ -0,0 +1,5 @@ +Tag: Hugo · Engineering Blog +

    Tag: Hugo

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/hugo/index.xml b/tags/hugo/index.xml new file mode 100644 index 0000000..dbd875a --- /dev/null +++ b/tags/hugo/index.xml @@ -0,0 +1,2 @@ +Hugo on Engineering Bloghttps://pgrunm.github.io/tags/hugo/Recent content in Hugo on Engineering BlogHugoen-usSat, 01 Feb 2020 14:31:31 +0100Building my new blog: Part 1https://pgrunm.github.io/posts/building_blog_part1/Sat, 01 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part1/I wanted to create a blog for a long time already, but because of university I had not much spare time. Finally I found some time to create my blog and this post will contain some background information about the software I&rsquo;m using, where it&rsquo;s hosted etc. Enjoy my first post! +What to use Link to heading The first question I asked myself was: What software I&rsquo;m going to use for my blog? \ No newline at end of file diff --git a/tags/hugo/page/1/index.html b/tags/hugo/page/1/index.html new file mode 100644 index 0000000..0dc2011 --- /dev/null +++ b/tags/hugo/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/hugo/ + \ No newline at end of file diff --git a/tags/index.html b/tags/index.html new file mode 100644 index 0000000..d87fcbb --- /dev/null +++ b/tags/index.html @@ -0,0 +1,37 @@ +Tags · Engineering Blog +

    Tags

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/index.xml b/tags/index.xml new file mode 100644 index 0000000..d9e31e2 --- /dev/null +++ b/tags/index.xml @@ -0,0 +1 @@ +Tags on Engineering Bloghttps://pgrunm.github.io/tags/Recent content in Tags on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Cncfhttps://pgrunm.github.io/tags/cncf/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/cncf/Flatcarhttps://pgrunm.github.io/tags/flatcar/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/flatcar/Kopshttps://pgrunm.github.io/tags/kops/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/kops/Kuberneteshttps://pgrunm.github.io/tags/kubernetes/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/kubernetes/Securityhttps://pgrunm.github.io/tags/security/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/security/Yamlhttps://pgrunm.github.io/tags/yaml/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/yaml/AWShttps://pgrunm.github.io/tags/aws/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/aws/DevOpshttps://pgrunm.github.io/tags/devops/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/devops/Flutterhttps://pgrunm.github.io/tags/flutter/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/flutter/Jenkinshttps://pgrunm.github.io/tags/jenkins/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/jenkins/Terraformhttps://pgrunm.github.io/tags/terraform/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/terraform/Apachehttps://pgrunm.github.io/tags/apache/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/apache/Forward Proxyhttps://pgrunm.github.io/tags/forward-proxy/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/forward-proxy/Nginxhttps://pgrunm.github.io/tags/nginx/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/nginx/Windowshttps://pgrunm.github.io/tags/windows/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/windows/GPThttps://pgrunm.github.io/tags/gpt/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/gpt/Hard Drivehttps://pgrunm.github.io/tags/hard-drive/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/hard-drive/Linuxhttps://pgrunm.github.io/tags/linux/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/linux/MBRhttps://pgrunm.github.io/tags/mbr/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/mbr/Ansiblehttps://pgrunm.github.io/tags/ansible/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/ansible/Raspberry Pihttps://pgrunm.github.io/tags/raspberry-pi/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/raspberry-pi/Gohttps://pgrunm.github.io/tags/go/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/go/Metricshttps://pgrunm.github.io/tags/metrics/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/metrics/Monitoringhttps://pgrunm.github.io/tags/monitoring/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/monitoring/Prometheushttps://pgrunm.github.io/tags/prometheus/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/prometheus/APIhttps://pgrunm.github.io/tags/api/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/api/Scalabilityhttps://pgrunm.github.io/tags/scalability/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/scalability/Bashhttps://pgrunm.github.io/tags/bash/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/bash/Continuous Deploymenthttps://pgrunm.github.io/tags/continuous-deployment/Mon, 01 
Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/continuous-deployment/Githttps://pgrunm.github.io/tags/git/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/git/Submodulehttps://pgrunm.github.io/tags/submodule/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/submodule/First Posthttps://pgrunm.github.io/tags/first-post/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/first-post/Hugohttps://pgrunm.github.io/tags/hugo/Mon, 01 Jan 0001 00:00:00 +0000https://pgrunm.github.io/tags/hugo/ \ No newline at end of file diff --git a/tags/jenkins/index.html b/tags/jenkins/index.html new file mode 100644 index 0000000..fb67c27 --- /dev/null +++ b/tags/jenkins/index.html @@ -0,0 +1,6 @@ +Tag: Jenkins · Engineering Blog +

    Tag: Jenkins

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/jenkins/index.xml b/tags/jenkins/index.xml new file mode 100644 index 0000000..e3d9b75 --- /dev/null +++ b/tags/jenkins/index.xml @@ -0,0 +1,3 @@ +Jenkins on Engineering Bloghttps://pgrunm.github.io/tags/jenkins/Recent content in Jenkins on Engineering BlogHugoen-usMon, 21 Jun 2021 23:00:00 +0100Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the the required Jenkins containers (master/agent) are set up. Let&rsquo;s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It&rsquo;s been a while, because I finally finished my studied and I&rsquo;m now Bachelor of Science :-). Anyway, I wanted to create a blog post of my bachelor thesis and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastrcture was completely created with Terraform in AWS. \ No newline at end of file diff --git a/tags/jenkins/page/1/index.html b/tags/jenkins/page/1/index.html new file mode 100644 index 0000000..4d4974d --- /dev/null +++ b/tags/jenkins/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/jenkins/ + \ No newline at end of file diff --git a/tags/kops/index.html b/tags/kops/index.html new file mode 100644 index 0000000..f3f39dd --- /dev/null +++ b/tags/kops/index.html @@ -0,0 +1,5 @@ +Tag: Kops · Engineering Blog +

    Tag: Kops

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/kops/index.xml b/tags/kops/index.xml new file mode 100644 index 0000000..52e5033 --- /dev/null +++ b/tags/kops/index.xml @@ -0,0 +1,3 @@ +Kops on Engineering Bloghttps://pgrunm.github.io/tags/kops/Recent content in Kops on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it&rsquo;s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have up to thousands of virtual servers, where no engineer can manually update these servers. So what to do, if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem. \ No newline at end of file diff --git a/tags/kops/page/1/index.html b/tags/kops/page/1/index.html new file mode 100644 index 0000000..983423f --- /dev/null +++ b/tags/kops/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/kops/ + \ No newline at end of file diff --git a/tags/kubernetes/index.html b/tags/kubernetes/index.html new file mode 100644 index 0000000..5072071 --- /dev/null +++ b/tags/kubernetes/index.html @@ -0,0 +1,6 @@ +Tag: Kubernetes · Engineering Blog +

    Tag: Kubernetes

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/kubernetes/index.xml b/tags/kubernetes/index.xml new file mode 100644 index 0000000..6033483 --- /dev/null +++ b/tags/kubernetes/index.xml @@ -0,0 +1,4 @@ +Kubernetes on Engineering Bloghttps://pgrunm.github.io/tags/kubernetes/Recent content in Kubernetes on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it&rsquo;s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have up to thousands of virtual servers, where no engineer can manually update these servers. So what to do, if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem.Kubernetes templating with Carvel ytthttps://pgrunm.github.io/posts/ytt/Sun, 25 Jun 2023 13:35:33 +0200https://pgrunm.github.io/posts/ytt/Introduction Link to heading Hello again, this is another blog post about a great CNCF tool. If you&rsquo;ve ever worked with Kubernetes manifests, you probably know that editing or creating them by hand can be very painful. +On the other side, you as a developer or engineer don&rsquo;t want to edit a lot in these manifests. It is usually better to edit the necessary parts and leave the rest as it was before. \ No newline at end of file diff --git a/tags/kubernetes/page/1/index.html b/tags/kubernetes/page/1/index.html new file mode 100644 index 0000000..aff0a9a --- /dev/null +++ b/tags/kubernetes/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/kubernetes/ + \ No newline at end of file diff --git a/tags/linux/index.html b/tags/linux/index.html new file mode 100644 index 0000000..fca97e6 --- /dev/null +++ b/tags/linux/index.html @@ -0,0 +1,6 @@ +Tag: Linux · Engineering Blog +

    Tag: Linux

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/linux/index.xml b/tags/linux/index.xml new file mode 100644 index 0000000..2be2e22 --- /dev/null +++ b/tags/linux/index.xml @@ -0,0 +1,3 @@ +Linux on Engineering Bloghttps://pgrunm.github.io/tags/linux/Recent content in Linux on Engineering BlogHugoen-usSun, 19 Apr 2020 21:45:07 +0100GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neigher read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn&rsquo;t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive.Setting up the new Raspberry Pi 4 with Ansiblehttps://pgrunm.github.io/posts/raspi4_setup/Sat, 28 Mar 2020 18:45:07 +0100https://pgrunm.github.io/posts/raspi4_setup/Since June 2019 the new Raspberry Pi 4 is available to buy. It features much more memory (up to 4 GiB), a Gigabit Ethernet port and two USB 3 ports. So there is a lot of power to compute with, but before we can start playing with it, we have to set it up. +One more thing to say: I don&rsquo;t want to manage my Pi by CLI but with Ansible. \ No newline at end of file diff --git a/tags/linux/page/1/index.html b/tags/linux/page/1/index.html new file mode 100644 index 0000000..29dba3d --- /dev/null +++ b/tags/linux/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/linux/ + \ No newline at end of file diff --git a/tags/mbr/index.html b/tags/mbr/index.html new file mode 100644 index 0000000..5eb7e00 --- /dev/null +++ b/tags/mbr/index.html @@ -0,0 +1,5 @@ +Tag: MBR · Engineering Blog +

    Tag: MBR

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/mbr/index.xml b/tags/mbr/index.xml new file mode 100644 index 0000000..778bb42 --- /dev/null +++ b/tags/mbr/index.xml @@ -0,0 +1,2 @@ +MBR on Engineering Bloghttps://pgrunm.github.io/tags/mbr/Recent content in MBR on Engineering BlogHugoen-usSun, 19 Apr 2020 21:45:07 +0100GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neigher read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn&rsquo;t support partitions larger than 2 TiB so I could not use all of my 3 TiB drive. \ No newline at end of file diff --git a/tags/mbr/page/1/index.html b/tags/mbr/page/1/index.html new file mode 100644 index 0000000..1e75c99 --- /dev/null +++ b/tags/mbr/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/mbr/ + \ No newline at end of file diff --git a/tags/metrics/index.html b/tags/metrics/index.html new file mode 100644 index 0000000..d1119b0 --- /dev/null +++ b/tags/metrics/index.html @@ -0,0 +1,5 @@ +Tag: Metrics · Engineering Blog +

    Tag: Metrics

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/metrics/index.xml b/tags/metrics/index.xml new file mode 100644 index 0000000..a478fa9 --- /dev/null +++ b/tags/metrics/index.xml @@ -0,0 +1,2 @@ +Metrics on Engineering Bloghttps://pgrunm.github.io/tags/metrics/Recent content in Metrics on Engineering BlogHugoen-usMon, 16 Mar 2020 19:33:10 +0100Using Prometheus for application metricshttps://pgrunm.github.io/posts/prometheus_instrumenting/Mon, 16 Mar 2020 19:33:10 +0100https://pgrunm.github.io/posts/prometheus_instrumenting/One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google for example uses their own software (called Borgmon) for monitoring. The open source software Prometheus allows to capture time-series data in order to monitor different statistics of an application like Borgmon does. Let&rsquo;s see how we can do this on our own. +The basics of Prometheus Link to heading Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple html overview with a local webserver. \ No newline at end of file diff --git a/tags/metrics/page/1/index.html b/tags/metrics/page/1/index.html new file mode 100644 index 0000000..a2ec82f --- /dev/null +++ b/tags/metrics/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/metrics/ + \ No newline at end of file diff --git a/tags/monitoring/index.html b/tags/monitoring/index.html new file mode 100644 index 0000000..5ebd712 --- /dev/null +++ b/tags/monitoring/index.html @@ -0,0 +1,5 @@ +Tag: Monitoring · Engineering Blog +

    Tag: Monitoring

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/monitoring/index.xml b/tags/monitoring/index.xml new file mode 100644 index 0000000..945f7b1 --- /dev/null +++ b/tags/monitoring/index.xml @@ -0,0 +1,2 @@ +Monitoring on Engineering Bloghttps://pgrunm.github.io/tags/monitoring/Recent content in Monitoring on Engineering BlogHugoen-usMon, 16 Mar 2020 19:33:10 +0100Using Prometheus for application metricshttps://pgrunm.github.io/posts/prometheus_instrumenting/Mon, 16 Mar 2020 19:33:10 +0100https://pgrunm.github.io/posts/prometheus_instrumenting/One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google for example uses their own software (called Borgmon) for monitoring. The open source software Prometheus allows to capture time-series data in order to monitor different statistics of an application like Borgmon does. Let&rsquo;s see how we can do this on our own. +The basics of Prometheus Link to heading Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple html overview with a local webserver. \ No newline at end of file diff --git a/tags/monitoring/page/1/index.html b/tags/monitoring/page/1/index.html new file mode 100644 index 0000000..4f9b639 --- /dev/null +++ b/tags/monitoring/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/monitoring/ + \ No newline at end of file diff --git a/tags/nginx/index.html b/tags/nginx/index.html new file mode 100644 index 0000000..4634bc8 --- /dev/null +++ b/tags/nginx/index.html @@ -0,0 +1,5 @@ +Tag: Nginx · Engineering Blog +

    Tag: Nginx

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/nginx/index.xml b/tags/nginx/index.xml new file mode 100644 index 0000000..03f8f67 --- /dev/null +++ b/tags/nginx/index.xml @@ -0,0 +1 @@ +Nginx on Engineering Bloghttps://pgrunm.github.io/tags/nginx/Recent content in Nginx on Engineering BlogHugoen-usTue, 07 Jul 2020 18:45:07 +0100Establishing proxy support for an application without proxy supporthttps://pgrunm.github.io/posts/nginx_forward_proxy/Tue, 07 Jul 2020 18:45:07 +0100https://pgrunm.github.io/posts/nginx_forward_proxy/Introduction Link to heading Hello again dear reader :-)! Some time passed since my last blog post, because I have been busy with University, but now since exams are done, I have some more time for creating the latest post. Recently I stumbled upon an application that needed internet access but unfortunately didn&rsquo;t support a proxy server yet. At that point of the project we had to find a way to allow this application to communicate directly with the internet, but without having a direct connection to the internet. \ No newline at end of file diff --git a/tags/nginx/page/1/index.html b/tags/nginx/page/1/index.html new file mode 100644 index 0000000..ae35a5a --- /dev/null +++ b/tags/nginx/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/nginx/ + \ No newline at end of file diff --git a/tags/prometheus/index.html b/tags/prometheus/index.html new file mode 100644 index 0000000..dd064da --- /dev/null +++ b/tags/prometheus/index.html @@ -0,0 +1,5 @@ +Tag: Prometheus · Engineering Blog +

    Tag: Prometheus

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/prometheus/index.xml b/tags/prometheus/index.xml new file mode 100644 index 0000000..48f9f6f --- /dev/null +++ b/tags/prometheus/index.xml @@ -0,0 +1,2 @@ +Prometheus on Engineering Bloghttps://pgrunm.github.io/tags/prometheus/Recent content in Prometheus on Engineering BlogHugoen-usMon, 16 Mar 2020 19:33:10 +0100Using Prometheus for application metricshttps://pgrunm.github.io/posts/prometheus_instrumenting/Mon, 16 Mar 2020 19:33:10 +0100https://pgrunm.github.io/posts/prometheus_instrumenting/One really important aspect of system and application engineering is monitoring. How do you know if your application, script or system is fine? Google, for example, uses its own software (called Borgmon) for monitoring. The open source software Prometheus allows you to capture time-series data in order to monitor different statistics of an application, just like Borgmon does. Let&rsquo;s see how we can do this on our own. +The basics of Prometheus Link to heading Some time ago I wrote a small Go application which parses RSS feeds and displays them as a simple HTML overview with a local webserver. \ No newline at end of file diff --git a/tags/prometheus/page/1/index.html b/tags/prometheus/page/1/index.html new file mode 100644 index 0000000..4af2c96 --- /dev/null +++ b/tags/prometheus/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/prometheus/ + \ No newline at end of file diff --git a/tags/raspberry-pi/index.html b/tags/raspberry-pi/index.html new file mode 100644 index 0000000..e99e7dc --- /dev/null +++ b/tags/raspberry-pi/index.html @@ -0,0 +1,5 @@ +Tag: Raspberry Pi · Engineering Blog +

    Tag: Raspberry Pi

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/raspberry-pi/index.xml b/tags/raspberry-pi/index.xml new file mode 100644 index 0000000..717a6c0 --- /dev/null +++ b/tags/raspberry-pi/index.xml @@ -0,0 +1,2 @@ +Raspberry Pi on Engineering Bloghttps://pgrunm.github.io/tags/raspberry-pi/Recent content in Raspberry Pi on Engineering BlogHugoen-usSat, 28 Mar 2020 18:45:07 +0100Setting up the new Raspberry Pi 4 with Ansiblehttps://pgrunm.github.io/posts/raspi4_setup/Sat, 28 Mar 2020 18:45:07 +0100https://pgrunm.github.io/posts/raspi4_setup/Since June 2019 the new Raspberry Pi 4 has been available to buy. It features much more memory (up to 4 GiB), a Gigabit Ethernet port and two USB 3 ports. So there is a lot of power to compute with, but before we can start playing with it, we have to set it up. +One more thing to say: I don&rsquo;t want to manage my Pi via the CLI but with Ansible. \ No newline at end of file diff --git a/tags/raspberry-pi/page/1/index.html b/tags/raspberry-pi/page/1/index.html new file mode 100644 index 0000000..6ee989e --- /dev/null +++ b/tags/raspberry-pi/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/raspberry-pi/ + \ No newline at end of file diff --git a/tags/scalability/index.html b/tags/scalability/index.html new file mode 100644 index 0000000..75630e0 --- /dev/null +++ b/tags/scalability/index.html @@ -0,0 +1,5 @@ +Tag: Scalability · Engineering Blog +
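To make "managing the Pi with Ansible instead of the CLI" a bit more concrete, here is a minimal playbook sketch. The inventory group name and the tasks are assumptions for illustration, not the author's actual playbook:

```yaml
# pi.yml -- illustrative sketch, assuming an inventory group "raspberrypi" exists.
- name: Basic Raspberry Pi maintenance
  hosts: raspberrypi
  become: true
  tasks:
    - name: Update the apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Ensure a few common tools are installed
      ansible.builtin.apt:
        name:
          - vim
          - htop
        state: present
```

Running something like `ansible-playbook -i inventory pi.yml` applies the same state every time, which is what makes this approach nicer than ad-hoc SSH sessions.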

    Tag: Scalability

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/scalability/index.xml b/tags/scalability/index.xml new file mode 100644 index 0000000..3293339 --- /dev/null +++ b/tags/scalability/index.xml @@ -0,0 +1,2 @@ +Scalability on Engineering Bloghttps://pgrunm.github.io/tags/scalability/Recent content in Scalability on Engineering BlogHugoen-usWed, 26 Feb 2020 19:33:10 +0100Scaling experiments with different AWS serviceshttps://pgrunm.github.io/posts/aws_scaling_comparison/Wed, 26 Feb 2020 19:33:10 +0100https://pgrunm.github.io/posts/aws_scaling_comparison/As part of my studies I had to write an assignment in the module Electronic Business. I decided to develop some kind of dummy REST API application where I could try different architectures. The reason for trying this out was to see how the performance changes as you increase the load. +I decided to use Go for this project because it was designed for scalable cloud architectures, and compiling your code gives you a single binary file that you just have to upload to your machine and execute. \ No newline at end of file diff --git a/tags/scalability/page/1/index.html b/tags/scalability/page/1/index.html new file mode 100644 index 0000000..e9d381f --- /dev/null +++ b/tags/scalability/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/scalability/ + \ No newline at end of file diff --git a/tags/security/index.html b/tags/security/index.html new file mode 100644 index 0000000..7943be5 --- /dev/null +++ b/tags/security/index.html @@ -0,0 +1,5 @@ +Tag: Security · Engineering Blog +

    Tag: Security

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/security/index.xml b/tags/security/index.xml new file mode 100644 index 0000000..8be6e33 --- /dev/null +++ b/tags/security/index.xml @@ -0,0 +1,3 @@ +Security on Engineering Bloghttps://pgrunm.github.io/tags/security/Recent content in Security on Engineering BlogHugoen-usSat, 15 Jul 2023 10:40:30 +0200Automating Kubernetes operating system updates with Kured, kOps and Flatcarhttps://pgrunm.github.io/posts/kured/Sat, 15 Jul 2023 10:40:30 +0200https://pgrunm.github.io/posts/kured/Introduction Link to heading Hello everyone, it&rsquo;s time for a new post. +As you may know, operating system updates are a crucial part of IT security. In cloud environments you may have thousands of virtual servers, which no engineer can update manually. So what do you do if you want to automate these operating system updates? +The solution Link to heading Fortunately, there is a great solution to this problem. \ No newline at end of file diff --git a/tags/security/page/1/index.html b/tags/security/page/1/index.html new file mode 100644 index 0000000..9a3b250 --- /dev/null +++ b/tags/security/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/security/ + \ No newline at end of file diff --git a/tags/submodule/index.html b/tags/submodule/index.html new file mode 100644 index 0000000..2dbe168 --- /dev/null +++ b/tags/submodule/index.html @@ -0,0 +1,5 @@ +Tag: Submodule · Engineering Blog +
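For readers wondering what that solution looks like in practice: Kured runs as a DaemonSet, watches each node for a reboot sentinel, and then cordons, drains and reboots nodes one at a time. The snippet below is a heavily abridged, hypothetical sketch that only illustrates the scheduling-related flags; the complete manifests (including RBAC and the ServiceAccount) ship with the Kured project and its Helm chart, and the image tag shown here is an assumption:

```yaml
# kured-daemonset.yml -- abridged, illustrative sketch; not a complete deployment.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kured
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kured
  template:
    metadata:
      labels:
        name: kured
    spec:
      serviceAccountName: kured
      hostPID: true                 # kured needs host access to trigger the reboot
      containers:
        - name: kured
          image: ghcr.io/kubereboot/kured:1.14.0   # tag is an assumption
          env:
            - name: KURED_NODE_ID   # tells kured which node it is running on
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          command:
            - /usr/bin/kured
            - --period=1h               # how often to check for a pending reboot
            - --reboot-days=sat,sun     # only reboot on weekends
            - --start-time=3am
            - --end-time=6am
            - --time-zone=Europe/Berlin
```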

    Tag: Submodule

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/submodule/index.xml b/tags/submodule/index.xml new file mode 100644 index 0000000..0427a2b --- /dev/null +++ b/tags/submodule/index.xml @@ -0,0 +1,2 @@ +Submodule on Engineering Bloghttps://pgrunm.github.io/tags/submodule/Recent content in Submodule on Engineering BlogHugoen-usThu, 13 Feb 2020 14:31:31 +0100Building my new blog: Part 2https://pgrunm.github.io/posts/building_blog_part2/Thu, 13 Feb 2020 14:31:31 +0100https://pgrunm.github.io/posts/building_blog_part2/In the last post I wrote about my considerations on which software to use for my blog, where to host it and how to set it up. This post contains some more technical details like the git structure and the deployment process. So then let&rsquo;s dive in. +The git structure Link to heading The Hugo project mentions in its documentation that you should use a git submodule for the theme. Git explains that you can use this feature to integrate another project into your repository while still getting the latest commits from the other repo. \ No newline at end of file diff --git a/tags/submodule/page/1/index.html b/tags/submodule/page/1/index.html new file mode 100644 index 0000000..8402cbe --- /dev/null +++ b/tags/submodule/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/submodule/ + \ No newline at end of file diff --git a/tags/terraform/index.html b/tags/terraform/index.html new file mode 100644 index 0000000..f38786b --- /dev/null +++ b/tags/terraform/index.html @@ -0,0 +1,6 @@ +Tag: Terraform · Engineering Blog +
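One practical consequence of the submodule approach is worth spelling out: whatever builds and deploys the site also has to fetch the theme submodule, otherwise Hugo finds an empty themes directory. As a hedged sketch of what such a build-and-deploy step could look like with GitHub Actions (this uses the common actions/checkout, peaceiris/actions-hugo and peaceiris/actions-gh-pages actions and is an illustration, not necessarily the workflow behind this blog):

```yaml
# .github/workflows/deploy.yml -- illustrative sketch only.
name: Build and deploy Hugo site
on:
  push:
    branches: [main]

permissions:
  contents: write   # needed to push the generated site

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive   # also pulls the theme submodule

      - uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "latest"

      - name: Build the site
        run: hugo --minify        # output lands in ./public

      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```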

    Tag: Terraform

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/terraform/index.xml b/tags/terraform/index.xml new file mode 100644 index 0000000..ebfc652 --- /dev/null +++ b/tags/terraform/index.xml @@ -0,0 +1,3 @@ +Terraform on Engineering Bloghttps://pgrunm.github.io/tags/terraform/Recent content in Terraform on Engineering BlogHugoen-usMon, 21 Jun 2021 23:00:00 +0100Developing Flutter apps with cloud infrastructure: Part 2https://pgrunm.github.io/posts/infrastructure_flutter_part2/Mon, 21 Jun 2021 23:00:00 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part2/Introduction Link to heading Hello again dear reader. This is the 2nd part of the AWS Flutter development series. The first part covered how to create the required infrastructure in AWS with Terraform. This part will cover how the required Jenkins containers (master/agent) are set up. Let&rsquo;s dive into it. +Container setups Link to heading Jenkins Master container Link to heading The Jenkins master container is the brain of the entire application.Developing Flutter apps with cloud infrastructure: Part 1https://pgrunm.github.io/posts/infrastructure_flutter_part1/Wed, 21 Apr 2021 21:12:43 +0100https://pgrunm.github.io/posts/infrastructure_flutter_part1/Introduction Link to heading Hello again! It&rsquo;s been a while because I finally finished my studies and I&rsquo;m now a Bachelor of Science :-). Anyway, I wanted to create a blog post about my bachelor thesis, and this is going to be the one. +The topic of my thesis was how to speed up development performance when developing Flutter applications with cloud infrastructure. The infrastructure was completely created with Terraform in AWS. \ No newline at end of file diff --git a/tags/terraform/page/1/index.html b/tags/terraform/page/1/index.html new file mode 100644 index 0000000..4f470c8 --- /dev/null +++ b/tags/terraform/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/terraform/ + \ No newline at end of file diff --git a/tags/windows/index.html b/tags/windows/index.html new file mode 100644 index 0000000..76b1209 --- /dev/null +++ b/tags/windows/index.html @@ -0,0 +1,6 @@ +Tag: Windows · Engineering Blog +

    Tag: Windows

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/windows/index.xml b/tags/windows/index.xml new file mode 100644 index 0000000..becfa29 --- /dev/null +++ b/tags/windows/index.xml @@ -0,0 +1,2 @@ +Windows on Engineering Bloghttps://pgrunm.github.io/tags/windows/Recent content in Windows on Engineering BlogHugoen-usTue, 07 Jul 2020 18:45:07 +0100Establishing proxy support for an application without proxy supporthttps://pgrunm.github.io/posts/nginx_forward_proxy/Tue, 07 Jul 2020 18:45:07 +0100https://pgrunm.github.io/posts/nginx_forward_proxy/Introduction Link to heading Hello again dear reader :-)! Some time has passed since my last blog post because I have been busy with university, but now that exams are done, I have some more time for a new post. Recently I stumbled upon an application that needed internet access but unfortunately didn&rsquo;t support a proxy server yet. At that point of the project we had to find a way to allow this application to communicate with the internet without giving it a direct internet connection.GPT and MBR: Moving from MBR to GPThttps://pgrunm.github.io/posts/mbr_and_gpt/Sun, 19 Apr 2020 21:45:07 +0100https://pgrunm.github.io/posts/mbr_and_gpt/Intro Link to heading About a year ago I bought a used hard drive from a colleague of mine. This HDD has a size of 3 TiB and is supposed to hold big files like videos, images and some games that are neither read nor write intensive. Unfortunately I moved from my previous HDD with a Master Boot Record (MBR) and kept using the MBR. +This turned out to be a problem since MBR doesn&rsquo;t support partitions larger than 2 TiB, so I could not use all of my 3 TiB drive. \ No newline at end of file diff --git a/tags/windows/page/1/index.html b/tags/windows/page/1/index.html new file mode 100644 index 0000000..3f781bc --- /dev/null +++ b/tags/windows/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/windows/ + \ No newline at end of file diff --git a/tags/yaml/index.html b/tags/yaml/index.html new file mode 100644 index 0000000..899124f --- /dev/null +++ b/tags/yaml/index.html @@ -0,0 +1,5 @@ +Tag: Yaml · Engineering Blog +

    Tag: Yaml

    © 2019 - 2024 | Built with ♥️ by Pascal Grundmeier - DevOps engineering at scale.
    \ No newline at end of file diff --git a/tags/yaml/index.xml b/tags/yaml/index.xml new file mode 100644 index 0000000..1817ff1 --- /dev/null +++ b/tags/yaml/index.xml @@ -0,0 +1,2 @@ +Yaml on Engineering Bloghttps://pgrunm.github.io/tags/yaml/Recent content in Yaml on Engineering BlogHugoen-usSun, 25 Jun 2023 13:35:33 +0200Kubernetes templating with Carvel ytthttps://pgrunm.github.io/posts/ytt/Sun, 25 Jun 2023 13:35:33 +0200https://pgrunm.github.io/posts/ytt/Introduction Link to heading Hello again, this is another blog post about a great CNCF tool. If you&rsquo;ve ever worked with Kubernetes manifests, you probably know that editing or creating them by hand can be very painful. +On the other hand, you as a developer or engineer don&rsquo;t want to edit much in these manifests. It is usually better to edit the necessary parts and leave the rest as it was before. \ No newline at end of file diff --git a/tags/yaml/page/1/index.html b/tags/yaml/page/1/index.html new file mode 100644 index 0000000..50892e4 --- /dev/null +++ b/tags/yaml/page/1/index.html @@ -0,0 +1,2 @@ +https://pgrunm.github.io/tags/yaml/ + \ No newline at end of file
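To make "edit the necessary parts and leave the rest as it was" concrete, here is a minimal ytt sketch: a template that pulls everything environment-specific from a separate data-values file, so only that small file changes between environments. The Deployment fields and value names are illustrative, not taken from the post:

```yaml
# deployment.yml -- ytt template (illustrative sketch)
#@ load("@ytt:data", "data")
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.app_name
spec:
  replicas: #@ data.values.replicas
  selector:
    matchLabels:
      app: #@ data.values.app_name
  template:
    metadata:
      labels:
        app: #@ data.values.app_name
    spec:
      containers:
        - name: #@ data.values.app_name
          image: #@ data.values.image
```

```yaml
# values.yml -- the only file that differs per environment
#@data/values
---
app_name: demo-app
replicas: 2
image: nginx:1.25
```

Rendering the manifest is then a single command, for example `ytt -f deployment.yml -f values.yml`, and changing a replica count or image per environment only touches values.yml.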