Commit: Fixed some spelling and grammar

ran through MS Word for good measure too

saintdle authored Apr 20, 2022 · 1 parent 6813102 · commit 6cf47d2
Showing 1 changed file with 31 additions and 31 deletions: Days/day89.md
## Disaster Recovery

We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios, we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible, or at least with near-zero recovery time objectives (RTO).

This can only be achieved at scale when you automate the replication of the complete application stack to a standby environment.

This allows for fast failovers across cloud regions, cloud providers or between on-premises and cloud infrastructure.

Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using our minikube cluster that we deployed and configured a few sessions ago.

We will then create another minikube cluster with Kasten K10 also installed to act as our standby cluster which in theory could be any location.

Kasten K10 also has built-in functionality to ensure that if something were to happen to the Kubernetes cluster it is running on, the catalog data is replicated and available in a new one: [K10 Disaster Recovery](https://docs.kasten.io/latest/operating/dr.html).
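As a side note, if you do enable K10 Disaster Recovery, you set a passphrase that you must keep safe; per the K10 documentation, at recovery time you recreate it as a secret on the new cluster. A minimal sketch, where the passphrase value is a placeholder:

```
# Recreate the K10 DR passphrase on the recovery cluster
# (replace <passphrase> with the value set when K10 DR was enabled)
kubectl create secret generic k10-dr-secret \
  --namespace kasten-io \
  --from-literal key=<passphrase>
```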

### Add object storage to K10

I have cleaned out the S3 bucket that we created for the Kanister demo in the last session.

![](Images/Day89_Data1.png)

Port forward to access the K10 dashboard, open a new terminal to run the below command:

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`

![](Images/Day87_Data4.png)

To authenticate with the dashboard, we now need the token which we can get with the following commands.

```
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)
echo $TOKEN
```

Then we get access to the Kasten K10 dashboard.

![](Images/Day87_Data7.png)

Now that we are back in the Kasten K10 dashboard, we can add our location profile: select "Settings" at the top of the page and then "New Profile".

![](Images/Day89_Data2.png)

You can see from the image below that we have a choice when it comes to where this location profile is stored. We are going to select Amazon S3 and add our sensitive access credentials, region and bucket name.

![](Images/Day89_Data3.png)
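If you prefer not to click through the dashboard, location profiles can also be created declaratively via K10's Profile custom resource. The sketch below is based on the K10 API documentation; the secret name, bucket name and region are placeholders, so verify the field names against the docs for your K10 version:

```
# Store the AWS credentials K10 will use (placeholder values)
kubectl create secret generic k10-s3-secret \
  --namespace kasten-io \
  --from-literal aws_access_key_id=<ACCESS_KEY> \
  --from-literal aws_secret_access_key=<SECRET_KEY>

# Create the location profile pointing at our S3 bucket
cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-profile
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      name: <bucket-name>
      region: <region>
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret
        namespace: kasten-io
EOF
```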

If we scroll down in the New Profile creation window, you will see that we also have the ability to enable immutable backups, which leverages the S3 Object Lock API. For this demo we won't be using that.

![](Images/Day89_Data4.png)

Hit "Save Profile" and you can now see our newly created or added location profile as per below.

![](Images/Day89_Data5.png)

### Create a policy to back up the Pac-Man app to object storage

In the previous session, we created only an ad-hoc snapshot of our Pac-Man application; therefore, we need to create a backup policy that will send our application backups to our newly created object storage location.

If you head back to the dashboard and select the Policy card, you will see a screen as per below. Select "Create New Policy".

![](Images/Day89_Data6.png)

First, we can give our policy a useful name and description. We can also define our backup frequency; for demo purposes, I am using on-demand.

![](Images/Day89_Data7.png)

Next, we want to enable backups via Snapshot exports, meaning that we want to send our data out to our location profile. If you have multiple profiles, you can select which one you would like to send your backups to.

![](Images/Day89_Data8.png)

Next, we select the application by either name or labels, I am going to choose by name and all resources.

![](Images/Day89_Data9.png)

Under Advanced settings, we are not going to use any of these, but based on our [walkthrough of Kanister yesterday](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Days/day88.md), we can leverage Kanister as part of Kasten K10 as well to take those application-consistent copies of our data.

![](Images/Day89_Data10.png)

Finally select "Create Policy" and you will now see the policy in our Policy window.

![](Images/Day89_Data11.png)
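For reference, K10 policies are themselves custom resources, so the policy we just built in the UI could also be expressed as YAML. A rough sketch, assuming our application lives in a `pacman` namespace and uses the `s3-profile` location profile from earlier; the field names follow the K10 API documentation, so check them against your version:

```
cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: pacman-backup
  namespace: kasten-io
spec:
  comment: Backup and export the Pac-Man app
  frequency: '@onDemand'    # on-demand, as chosen in the UI
  actions:
    - action: backup
    - action: export        # send the snapshot to our location profile
      exportParameters:
        frequency: '@onDemand'
        profile:
          name: s3-profile
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: pacman
EOF
```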

At the bottom of the created policy, you will see "Show import details"; we need this string to be able to import into our standby cluster. Copy it somewhere safe for now.

![](Images/Day89_Data12.png)

Before we move on, we just need to select "run once" to get a backup sent to our object storage bucket.

![](Images/Day89_Data13.png)
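For completeness, an on-demand policy run can also be triggered from the CLI with a RunAction resource rather than the dashboard button. A sketch, assuming the hypothetical policy name `pacman-backup` from above:

```
# Trigger a manual run of the backup policy
cat <<EOF | kubectl create -f -
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RunAction
metadata:
  generateName: run-pacman-backup-
spec:
  subject:
    apiVersion: config.kio.kasten.io/v1alpha1
    kind: Policy
    name: pacman-backup
    namespace: kasten-io
EOF
```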

The screenshot below just shows the successful backup and export of our data.

![](Images/Day89_Data14.png)


### Create a new minikube cluster & deploy K10

We then need to deploy a second Kubernetes cluster. This could be any supported version of Kubernetes, including OpenShift; for the purpose of education, we will use the very free minikube again, with a different profile name.

Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster.

![](Images/Day89_Data15.png)

We can then deploy Kasten K10 in this cluster using:

`helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true --create-namespace`
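This assumes the Kasten Helm repository is already known to your Helm client from the earlier session; if not, add it first:

```
helm repo add kasten https://charts.kasten.io/
helm repo update
```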

This will take a while, but in the meantime we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods getting to the running status.

It is worth noting that because we are using minikube, our application will just run when we run our import policy, as our storageclass is the same on this standby cluster. However, something we will cover in the final session is mobility and transformation.
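You can confirm the two clusters expose the same storageclass with something like the below; minikube names each cluster's kubectl context after its profile, so the `minikube` and `standby` context names are assumptions based on our setup:

```
# Compare storage classes across the two minikube profiles
kubectl get storageclass --context minikube
kubectl get storageclass --context standby
```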

When the pods are up and running, we can follow the same steps we went through earlier on the other cluster.

Port forward to access the K10 dashboard, open a new terminal to run the below command:

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`

![](Images/Day87_Data4.png)

To authenticate with the dashboard, we now need the token which we can get with the following commands.

```
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)
echo $TOKEN
```

Then we get access to the Kasten K10 dashboard.

![](Images/Day87_Data7.png)

### Import Pac-Man into the new minikube cluster

At this point, we are now able to create an import policy in that standby cluster, connect to the object storage backups, and determine what and how we want this to look.

First, we add in our Location Profile that we walked through earlier on the other cluster, showing off dark mode here to highlight the difference between our production system and our DR standby location.

![](Images/Day89_Data16.png)

Now we go back to the dashboard and into the policies tab to create a new policy.

![](Images/Day89_Data17.png)

Create the import policy as per the below image. When complete, we can create the policy. There is an option here to restore after import, which some people might want; this will restore into our standby cluster on completion. We also have the ability to change the configuration of the application as it is restored, and this is what I have documented in [Day 90](day90.md).

![](Images/Day89_Data18.png)
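Declaratively, an import policy looks much like the backup policy, just with an `import` action carrying the import details string we copied earlier. A sketch under the same assumptions as before, with the string and profile name as placeholders:

```
cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: pacman-import
  namespace: kasten-io
spec:
  comment: Import Pac-Man backups from the primary cluster
  frequency: '@onDemand'
  actions:
    - action: import
      importParameters:
        profile:
          name: s3-profile
          namespace: kasten-io
        receiveString: "<paste the Show import details string here>"
EOF
```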

If we now head back to the dashboard and into the Applications card, we can then see the application we have just imported.

![](Images/Day89_Data21.png)

Here we can see the restore points we have available to us; this was the backup job that we ran on the primary cluster against our Pac-Man application.

![](Images/Day89_Data22.png)

When you hit "Restore" it will prompt you with a confirmation.

![](Images/Day89_Data24.png)

We can see below that we are in the standby cluster, and if we check on our pods, we can see that we have our running application.

![](Images/Day89_Data25.png)

We can then port forward (in real life/production environments, you would not need this step to access the application; you would be using ingress).

![](Images/Day89_Data26.png)
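For reference, the check and port forward would look something like the below; the `pacman` namespace, service name and port are assumptions based on how the app was deployed in the earlier sessions, so adjust them to match your environment:

```
# Confirm the restored application pods are running on the standby cluster
kubectl get pods -n pacman

# Temporarily expose the app locally (ingress would replace this in production)
kubectl port-forward service/pacman 9090:80 -n pacman
```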

Next, we will take a look at Application mobility and transformation.

