Merge pull request #493 from hendriknielaender/reword_posts
chore: reword grafana & cicd post
hendriknielaender authored Oct 14, 2024
2 parents d5c17cc + 2edfc49 commit de475c4
Showing 2 changed files with 35 additions and 32 deletions.
21 changes: 11 additions & 10 deletions data/blog/analyzing-gitlab-metrics-sqlite-grafana-guide.md
@@ -10,17 +10,17 @@ tags: [GitLab, CI/CD, DevOps, pipeline, automation]

A recent optimization has left us excited once again: the pipeline speed for a new project had
significantly improved. Execution times had been roughly cut in half across the board. Looking at a
few jobs, we could confirm this on an individual basis, but we were wondering about the bigger
picture. How had the pipeline improved over a longer timespan?


## GitLab's Dashboards

Having asked this question, the natural starting point was the existing GitLab dashboards. GitLab
has a wide variety of different dashboards already built-in, so it seemed likely that we would find
the answer there.

### GitLab's CI/CD Analytics

Under the project's `Analyze` tab, there is the `CI/CD Analytics` dashboard: a basic view of the
number of successful and total pipeline runs. Below it are the pipelines for the most recent
@@ -32,7 +32,7 @@ analysis.
Other dashboards in the `Analyze` tab sound intriguing, but they mostly display graphs based on
the number of commits, merge requests, GitLab issues, or lines changed.

### GitLab's Build Tab

The `Build` tab allows a review of the most recent jobs and pipelines, but it lacks insights into
a broader performance picture. The results are simply returned as a paginated table. For pipelines
@@ -46,13 +46,14 @@ old, with many people wishing for its resolution.

## Grafana

Having left the GitLab search empty-handed, we turned our focus towards Grafana.

### Grafana GitLab Plugin

The [Grafana GitLab
datasource](https://grafana.com/docs/plugins/grafana-gitlab-datasource/latest/) seems like a very
powerful plugin. It comes free with Grafana Cloud subscriptions, which also have a free tier, or
with any Grafana Enterprise license.

Unfortunately, in our scenario we cannot simply pipe our corporate GitLab data into a Grafana
Cloud account for some ad-hoc analysis, and our company doesn't have the enterprise license. Though
46 changes: 24 additions & 22 deletions data/blog/optimal-pipeline-setup.md
@@ -8,19 +8,20 @@ tweet: "https://twitter.com/doubletrblblogs/status/1752040836450132384"
tags: [cloud, infrastructure, IaC, GitLab, CI/CD, DevOps, pipeline, automation]
---

At work we have spent **a lot** of time together looking at the pipeline of a massive monorepo we
are handling. Through countless hours of doing this, we've arrived at a few "good defaults", or
useful techniques, which have proven useful in other repos too. In this post we will share these
general techniques with concrete examples for GitLab, but similar concepts can also be applied
to other CI/CD platforms.

## Use Caching & Artifacts

Many pipeline systems have the concept of caching and artifacts, and so does
[GitLab](https://docs.gitlab.com/ee/ci/caching/). Our recommendation is to use caches for
installed dependencies and artifacts for build results, with fallback caches to the main branch in
case the dependencies didn't change. This setup enables pull requests to bypass the installation
job when there are no changes in dependencies. Similarly, the artifact has to be built only once.

```yaml
variables:
@@ -51,11 +52,11 @@
```
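The collapsed diff only shows the start of this block. As a rough sketch of the idea (job names, paths, and cache keys here are our own illustration, not the exact configuration from the post):

```yaml
variables:
  # If no cache matches the lockfile-based key, fall back to the main branch's cache.
  CACHE_FALLBACK_KEY: main

install:
  stage: install
  cache:
    key:
      files:
        - pnpm-lock.yaml   # key changes only when dependencies change
    paths:
      - node_modules/
  script:
    - pnpm install

build:
  stage: build
  cache:
    key:
      files:
        - pnpm-lock.yaml
    paths:
      - node_modules/
    policy: pull           # later jobs only read the dependency cache
  script:
    - pnpm build
  artifacts:
    paths:
      - dist/              # the build result travels to later jobs as an artifact
```

With this shape, a merge request whose lockfile is unchanged hits an existing cache, and the build output is produced exactly once.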
## Fine-tune Job Triggers
The deployment to production on the main branch generally should be a manual action that can be
triggered immediately. It doesn't have to wait for the dev and test deployments to succeed again
first. A merged MR will already have had all necessary safety checks succeed in the pull request,
which makes it safe to deploy to production immediately after merging in 99% of cases. It also
means a hotfix can be deployed to production ASAP without waiting for a dev deployment first.
@@ -65,7 +66,7 @@ In GitLab, job dependencies are specified via the [needs](https://docs.gitlab.co
```yaml
needs: []
```
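A hypothetical production deployment job wired up this way might look like the following (the job name and deploy script are ours, not from the post):

```yaml
deploy-production:
  stage: deploy
  needs: []        # don't wait for dev/test deployment jobs
  when: manual     # one click to deploy, available as soon as the pipeline starts
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - ./deploy.sh production
```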
## Optimize Common Scenarios
If the pipeline feels slow, it can be helpful to think of pipelines in terms of their user
workflows, like an application. What are the use-cases the pipeline should support?
@@ -77,13 +78,14 @@ We often see these three across our repos:
For each supported use-case we can ask: how much waiting time is there in the pipeline until the
job that my workflow needs gets executed? If any use-case somehow needs a bunch of unrelated other
jobs to be executed first, then this can be optimized.

The ideal waiting time is 0.

## Find Dependencies between Stages
Adding more and more stages makes the pipeline easy to understand and satisfying to look at, but
often it comes at the detriment of speed. Therefore, removing and merging stages might yield a
faster pipeline which is still understandable.
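As a small illustration of merging stages, a single job can build and test in one go instead of spanning two stages (the commands are placeholders):

```yaml
# One job instead of separate build and test stages: no scheduling
# gap between the two steps, and one container start instead of two.
build-and-test:
  stage: build
  script:
    - pnpm build
    - pnpm test
  artifacts:
    paths:
      - dist/
```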
When tests are fast, it might even be advantageous to execute the test together with the build
@@ -94,7 +96,7 @@ is one that adapts to changes in the repository, to be always as fast as possible
For further reading, the team at GitLab also has an article focusing on this topic: [Pipeline
Efficiency](https://docs.gitlab.com/ee/ci/pipelines/pipeline_efficiency.html).
## Speed up the Build Container
If you use big dependencies during deployment, it can be worth it to bundle them into the
build container ahead of time. Maintaining 1 or 2 build containers is usually within reason, while it should of
@@ -110,7 +112,7 @@ Note that if you use shared GitLab runners in your AWS account, it might be usef
company-wide shared build containers. This allows for easier caching of the build container on the
runners, so it doesn't have to be downloaded for every job.
## Use Great Tools
Staying up to date on tools, especially in JavaScript, is essential. Better tools spring up all the
time and can bring you a decisive advantage in your pipeline. What has been amazing with
@@ -136,7 +138,7 @@
```yaml
install:
- pnpm install
```
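The diff collapses most of this block. As a sketch of pnpm in GitLab CI that caches pnpm's content-addressable store (the paths and flags are our assumption, not the post's exact setup):

```yaml
install:
  stage: install
  cache:
    key:
      files:
        - pnpm-lock.yaml
    paths:
      - .pnpm-store/       # pnpm's store dedupes packages across branches
  script:
    # keep the store inside the project dir so the runner can cache it
    - pnpm config set store-dir "$CI_PROJECT_DIR/.pnpm-store"
    - pnpm install --frozen-lockfile
```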
## Leverage Review Apps
Review apps, or branch-based deployments, are something nobody should sleep on. Especially with
serverless, once a team exceeds a certain size, branch-based environments are a big help. When you
@@ -150,7 +152,7 @@ Apps](https://double-trouble.dev/post/gitlab-review-apps-aws-vite/). This post s
for branch-based deployments in the frontend. This is often already enough, but to go all the way
with branch-based deployments for the backend as well, we'll help you out in a future post.
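A minimal review-app job pair in GitLab CI could look like this (the deploy and teardown scripts are hypothetical):

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop-review   # tie the teardown job to this environment
  rules:
    - if: $CI_MERGE_REQUEST_IID

stop-review:
  stage: deploy
  script:
    - ./teardown.sh "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
  rules:
    - if: $CI_MERGE_REQUEST_IID
```

GitLab then links the environment URL directly on the merge request, so reviewers can click through to the branch's deployment.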
## Try CI Linting
Last but not least, when tuning the pipeline a lot, it is easy to make a small mistake which
leaves the pipeline broken after a push. To prevent at least some surprises here, it's a real
