Commit

Merge pull request #156 from blackpiglet/cherry_pick_cross_project_to_1.11

Cherry pick cross project to 1.11
ywk253100 authored Sep 20, 2023
2 parents b24ffe1 + 89e8abe commit e33d0e9
Showing 5 changed files with 238 additions and 34 deletions.
3 changes: 2 additions & 1 deletion examples/README.md
@@ -1,3 +1,4 @@
# Examples

- [Restore snapshots from GCP across projects](./gcp-projects.md)
- [Backup at project B, and restore at project A](./backup_at_b_restore_at_a.md)
- [Velero at project A, backup and restore at other projects](./velero_at_a_br_at_other.md)
@@ -1,11 +1,11 @@
# Restore snapshots from GCP across projects
# Backup at project B, and restore at project A

These steps are heavily inspired by the [GCP documentation](https://cloud.google.com/compute/docs/images/sharing-images-across-projects).

Assume the following...

- Project A [project-a]: GCP project we want to restore TO
- Project B [project-b]: GCP Project we want to restore FROM
- Project B [project-b]: GCP Project we want to backup FROM

The steps below assume that you have not set up Velero yet, so skip any steps you've already completed.

@@ -26,15 +26,17 @@ The steps below assume that you have not set up Velero yet, so skip
  - Assign [sa-b] "Storage Object Admin" permissions to [bucket-b]
  - Install Velero on the k8s cluster in this project with configs (see the CLI sketch at the end of this example)
    - credentials: [sa-b]
    - snapshotlocation: projectid=[project-b] and bucket=[bucket-b]
    - snapshotlocation: projectid=[project-b]
    - bucket: [bucket-b]
  - Create a Velero backup [backup-b] with the desired PVC snapshots

- In [project-a]

  - NOTE: Make sure to disable any scheduled backups.
  - Install Velero on the k8s cluster in this project with configs
    - credentials: [sa-a]
    - snapshotlocation: projectid=[project-b] and bucket=[bucket-b]
    - snapshotlocation: projectid=[project-b]
    - bucket: [bucket-b]
  - Create a Velero restore [restore-a] from [backup-b]

If everything was set up correctly, PVCs should be created from [project-b] snapshots.
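
A minimal CLI sketch of the two installs above, assuming the Velero CLI is available; the plugin version, credential-file paths, and the `my-app` namespace are illustrative, and the snapshot-location key is `project` in this plugin's configuration:

```bash
# In [project-b]: install Velero with [sa-b] and take the backup.
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.6.0 \
    --bucket bucket-b \
    --secret-file ./sa-b-credentials.json \
    --snapshot-location-config project=project-b
velero backup create backup-b --include-namespaces my-app

# In [project-a]: install Velero with [sa-a] against the same bucket,
# then restore from the backup taken in [project-b].
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.6.0 \
    --bucket bucket-b \
    --secret-file ./sa-a-credentials.json \
    --snapshot-location-config project=project-b
velero restore create restore-a --from-backup backup-b
```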
51 changes: 51 additions & 0 deletions examples/velero_at_a_br_at_other.md
@@ -0,0 +1,51 @@
# Velero at project A, backup and restore at other projects

This scenario was introduced in [issue 4806](https://github.com/vmware-tanzu/velero/issues/4806).

Assume the following...

- Project A [project-a]: The project where Velero's service account is located; this service account is granted enough permissions to perform backup and restore in the other projects.
- Project B [project-b]: The GCP project we want to restore TO.
- Project C [project-c]: The GCP project we want to backup FROM.

## Set up Velero with permission in projects

A `gcloud`/`gsutil` sketch of these steps follows this section.

* In **project-a**
  * Create "Velero Server" IAM role **role-a** with the required role permissions.
  * Create ServiceAccount **sa-a**.
  * Assign **sa-a** the role **role-a**.
  * Assign **sa-a** the role **role-b** (run this after **role-b** is created in **project-b**).
  * Assign **sa-a** the role **role-c** (run this after **role-c** is created in **project-c**).
  * Create a bucket **bucket-a**.
  * Grant **sa-a** "Storage Object Admin" permissions on **bucket-a**.
  * Grant **sa-b** "Storage Object Admin" permissions on **bucket-a** (run this after **sa-b** is created in **project-b**).
  * Grant **sa-c** "Storage Object Admin" permissions on **bucket-a** (run this after **sa-c** is created in **project-c**).


* In **project-b**
  * Add the ServiceAccount **sa-a** into project **project-b** according to [Granting service accounts access to your projects](https://cloud.google.com/marketplace/docs/grant-service-account-access).
  * Create ServiceAccount **sa-b**.
  * Create "Velero Server" IAM role **role-b** with the required role permissions.
  * Assign **sa-b** the role **role-b**.

* In **project-c**
  * Add the ServiceAccount **sa-a** into project **project-c** according to [Granting service accounts access to your projects](https://cloud.google.com/marketplace/docs/grant-service-account-access).
  * Create ServiceAccount **sa-c**.
  * Create "Velero Server" IAM role **role-c** with the required role permissions.
  * Assign **sa-c** the role **role-c**.
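
A `gcloud`/`gsutil` sketch of the wiring above, shown for **project-b** only (repeat with **sa-c**/**role-c** for **project-c**); the role ID, permission list, and account names are illustrative, not the plugin's required set:

```bash
# In project-b: create sa-b and a "Velero Server" role, then bind them.
gcloud iam service-accounts create sa-b --project project-b
gcloud iam roles create velero_server --project project-b \
    --title "Velero Server" \
    --permissions compute.disks.get,compute.disks.create,compute.disks.createSnapshot,compute.snapshots.get,compute.snapshots.create,compute.snapshots.delete,compute.snapshots.useReadOnly,compute.zones.get
gcloud projects add-iam-policy-binding project-b \
    --member serviceAccount:sa-b@project-b.iam.gserviceaccount.com \
    --role projects/project-b/roles/velero_server

# Grant sa-a (from project-a) the same role in project-b.
gcloud projects add-iam-policy-binding project-b \
    --member serviceAccount:sa-a@project-a.iam.gserviceaccount.com \
    --role projects/project-b/roles/velero_server

# Back in project-a: give sa-b object access on bucket-a.
gsutil iam ch \
    serviceAccount:sa-b@project-b.iam.gserviceaccount.com:objectAdmin \
    gs://bucket-a
```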

## Backup at project C
* In **project-c**
  * Install Velero on the k8s cluster in this project with the following configuration (see the sketch at the end of this example):
    * SecretFile: **sa-a**
    * SnapshotLocation: project=**project-a** and volumeProject=**project-c**
    * Bucket: **bucket-a**
  * Create Velero backup **backup-c** containing the desired PVC snapshots.

## Restore at project B
* In **project-b**
  * NOTE: Make sure to disable any scheduled backups.
  * Install Velero on the k8s cluster in this project with the following configuration (see the sketch below):
    * SecretFile: **sa-a**
    * SnapshotLocation: project=**project-a** and volumeProject=**project-b**
    * Bucket: **bucket-a**
  * Create Velero restore **restore-b** from backup **backup-c**.
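
A minimal sketch of the two installs above, assuming the Velero CLI; the plugin version, credential path, and `my-app` namespace are illustrative. Note that `project` points at **project-a** (where snapshots are created under **sa-a**'s identity) while `volumeProject` names the project whose disks are snapshotted or restored:

```bash
# In project-c: back up, snapshotting disks that live in project-c.
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.6.0 \
    --bucket bucket-a \
    --secret-file ./sa-a-credentials.json \
    --snapshot-location-config project=project-a,volumeProject=project-c
velero backup create backup-c --include-namespaces my-app

# In project-b: restore, creating disks in project-b from those snapshots.
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.6.0 \
    --bucket bucket-a \
    --secret-file ./sa-a-credentials.json \
    --snapshot-location-config project=project-a,volumeProject=project-b
velero restore create restore-b --from-backup backup-c
```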
38 changes: 31 additions & 7 deletions velero-plugin-for-gcp/volume_snapshotter.go
@@ -44,6 +44,7 @@ const (
projectKey = "project"
snapshotLocationKey = "snapshotLocation"
pdCSIDriver = "pd.csi.storage.gke.io"
volumeProjectKey = "volumeProject"
)

var pdVolRegexp = regexp.MustCompile(`^projects\/[^\/]+\/(zones|regions)\/[^\/]+\/disks\/[^\/]+$`)
@@ -61,7 +62,8 @@ func newVolumeSnapshotter(logger logrus.FieldLogger) *VolumeSnapshotter {
}

func (b *VolumeSnapshotter) Init(config map[string]string) error {
if err := veleroplugin.ValidateVolumeSnapshotterConfigKeys(config, snapshotLocationKey, projectKey, credentialsFileConfigKey); err != nil {
if err := veleroplugin.ValidateVolumeSnapshotterConfigKeys(config,
snapshotLocationKey, projectKey, credentialsFileConfigKey, volumeProjectKey); err != nil {
return err
}

@@ -98,7 +100,7 @@ func (b *VolumeSnapshotter) Init(config map[string]string) error {

b.snapshotLocation = config[snapshotLocationKey]

b.volumeProject = config[projectKey]
b.volumeProject = config[volumeProjectKey]
if b.volumeProject == "" {
b.volumeProject = creds.ProjectID
}
@@ -131,15 +133,18 @@ func isMultiZone(volumeAZ string) bool {
// parseRegion parses a failure-domain tag with multiple zones
// and returns a single region. Zones are separated by double underscores (__).
// For example
// input: us-central1-a__us-central1-b
// return: us-central1
//
// input: us-central1-a__us-central1-b
// return: us-central1
//
// When a custom storage class spans multiple geographical zones,
// such as us-central1 and us-west1 only the zone matching the cluster is used
// in the failure-domain tag.
// For example
// Cluster nodes in us-central1-c, us-central1-f
// Storage class zones us-central1-a, us-central1-f, us-east1-a, us-east1-d
// The failure-domain tag would be: us-central1-a__us-central1-f
//
// Cluster nodes in us-central1-c, us-central1-f
// Storage class zones us-central1-a, us-central1-f, us-east1-a, us-east1-d
// The failure-domain tag would be: us-central1-a__us-central1-f
func parseRegion(volumeAZ string) (string, error) {
zones := strings.Split(volumeAZ, zoneSeparator)
zone := zones[0]
@@ -411,6 +416,10 @@ func (b *VolumeSnapshotter) SetVolumeID(unstructuredPV runtime.Unstructured, vol
return nil, fmt.Errorf("invalid volumeHandle for restore with CSI driver:%s, expected projects/{project}/zones/{zone}/disks/{name}, got %s",
pdCSIDriver, handle)
}
// The snapshot's source volume lives in a different project: rewrite the
// handle's project segment so the disk is restored into b.volumeProject.
if b.IsVolumeCreatedCrossProjects(handle) {
projectRE := regexp.MustCompile(`projects\/[^\/]+\/`)
handle = projectRE.ReplaceAllString(handle, "projects/"+b.volumeProject+"/")
}
pv.Spec.CSI.VolumeHandle = handle[:strings.LastIndex(handle, "/")+1] + volumeID
} else {
return nil, fmt.Errorf("unable to handle CSI driver: %s", driver)
@@ -428,3 +437,18 @@ func (b *VolumeSnapshotter) SetVolumeID(unstructuredPV runtime.Unstructured, vol

return &unstructured.Unstructured{Object: res}, nil
}

func (b *VolumeSnapshotter) IsVolumeCreatedCrossProjects(volumeHandle string) bool {
// A volume handle looks like "projects/{project}/zones/{zone}/disks/{name}",
// so the project ID is the second "/"-separated segment.
parsedStr := strings.Split(volumeHandle, "/")
if len(parsedStr) < 2 {
return false
}
projectID := parsedStr[1]

return projectID != b.volumeProject
}
170 changes: 148 additions & 22 deletions velero-plugin-for-gcp/volume_snapshotter_test.go
@@ -18,7 +18,7 @@ package main

import (
"encoding/json"
"strings"
"os"
"testing"

"github.com/pkg/errors"
@@ -155,15 +155,13 @@ func TestSetVolumeID(t *testing.T) {
}

func TestSetVolumeIDForCSI(t *testing.T) {
b := &VolumeSnapshotter{
log: logrus.New(),
}

cases := []struct {
name string
csiJSON string
volumeID string
wantErr bool
name string
csiJSON string
volumeID string
wantErr bool
volumeProject string
wantedVolumeID string
}{
{
name: "set ID to CSI with GKE pd CSI driver",
@@ -172,8 +170,10 @@
"fsType": "ext4",
"volumeHandle": "projects/velero-gcp/zones/us-central1-f/disks/pvc-a970184f-6cc1-4769-85ad-61dcaf8bf51d"
}`,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: false,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: false,
volumeProject: "velero-gcp",
wantedVolumeID: "projects/velero-gcp/zones/us-central1-f/disks/restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
},
{
name: "set ID to CSI with GKE pd CSI driver, but the volumeHandle is invalid",
@@ -182,22 +182,41 @@
"fsType": "ext4",
"volumeHandle": "pvc-a970184f-6cc1-4769-85ad-61dcaf8bf51d"
}`,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: true,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: true,
volumeProject: "velero-gcp",
},
{
name: "set ID to CSI with unknown driver",
csiJSON: `"{
csiJSON: `{
"driver": "xxx.csi.storage.gke.io",
"fsType": "ext4",
"volumeHandle": "projects/velero-gcp/zones/us-central1-f/disks/pvc-a970184f-6cc1-4769-85ad-61dcaf8bf51d"
}`,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: true,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: true,
volumeProject: "velero-gcp",
},
{
name: "volume project is different from original handle project",
csiJSON: `{
"driver": "pd.csi.storage.gke.io",
"fsType": "ext4",
"volumeHandle": "projects/velero-gcp/zones/us-central1-f/disks/pvc-a970184f-6cc1-4769-85ad-61dcaf8bf51d"
}`,
volumeID: "restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
wantErr: false,
volumeProject: "velero-gcp-2",
wantedVolumeID: "projects/velero-gcp-2/zones/us-central1-f/disks/restore-fd9729b5-868b-4544-9568-1c5d9121dabc",
},
}
for _, tt := range cases {
t.Run(tt.name, func(t *testing.T) {
b := &VolumeSnapshotter{
log: logrus.New(),
volumeProject: tt.volumeProject,
}

res := &unstructured.Unstructured{
Object: map[string]interface{}{},
}
@@ -206,17 +225,16 @@ func TestSetVolumeIDForCSI(t *testing.T) {
res.Object["spec"] = map[string]interface{}{
"csi": csi,
}
originalVolHandle, _ := csi["volumeHandle"].(string)
newRes, err := b.SetVolumeID(res, tt.volumeID)
if tt.wantErr {
assert.Error(t, err)
require.Error(t, err)
} else {
assert.NoError(t, err)
require.NoError(t, err)
newPV := new(v1.PersistentVolume)
require.NoError(t, runtime.DefaultUnstructuredConverter.FromUnstructured(newRes.UnstructuredContent(), newPV))
ind := strings.LastIndex(newPV.Spec.CSI.VolumeHandle, "/")
assert.Equal(t, tt.volumeID, newPV.Spec.CSI.VolumeHandle[ind+1:])
assert.Equal(t, originalVolHandle[:ind], newPV.Spec.CSI.VolumeHandle[:ind])
if tt.wantedVolumeID != "" {
require.Equal(t, tt.wantedVolumeID, newPV.Spec.CSI.VolumeHandle)
}
}
})
}
@@ -354,3 +372,111 @@ func TestRegionHelpers(t *testing.T) {
})
}
}

func TestInit(t *testing.T) {
credentialFileName := "./credential_file"
defaultCredentialFileName := "./default_credential"
os.Setenv("GOOGLE_APPLICATION_CREDENTIALS", defaultCredentialFileName)
credentialContent := `{"type": "service_account","project_id": "project-a","private_key_id":"id","private_key":"key","client_email":"[email protected]","client_id":"id","auth_uri":"uri","token_uri":"uri","auth_provider_x509_cert_url":"url","client_x509_cert_url":"url"}`
f, err := os.Create(credentialFileName)
require.NoError(t, err)
_, err = f.Write([]byte(credentialContent))
require.NoError(t, err)

f, err = os.Create(defaultCredentialFileName)
require.NoError(t, err)
_, err = f.Write([]byte(credentialContent))
require.NoError(t, err)

tests := []struct {
name string
config map[string]string
expectedVolumeSnapshotter VolumeSnapshotter
}{
{
name: "Init with Credential files.",
config: map[string]string{
"project": "project-a",
"credentialsFile": credential_file_name,
"snapshotLocation": "default",
"volumeProject": "project-b",
},
expectedVolumeSnapshotter: VolumeSnapshotter{
snapshotLocation: "default",
volumeProject: "project-b",
snapshotProject: "project-a",
},
},
{
name: "Init without Credential files.",
config: map[string]string{
"project": "project-a",
"snapshotLocation": "default",
"volumeProject": "project-b",
},
expectedVolumeSnapshotter: VolumeSnapshotter{
snapshotLocation: "default",
volumeProject: "project-b",
snapshotProject: "project-a",
},
},
}

for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
volumeSnapshotter := newVolumeSnapshotter(logrus.StandardLogger())
err := volumeSnapshotter.Init(test.config)
require.NoError(t, err)
require.Equal(t, test.expectedVolumeSnapshotter.snapshotLocation, volumeSnapshotter.snapshotLocation)
require.Equal(t, test.expectedVolumeSnapshotter.volumeProject, volumeSnapshotter.volumeProject)
require.Equal(t, test.expectedVolumeSnapshotter.snapshotProject, volumeSnapshotter.snapshotProject)
})
}

err = os.Remove(credentialFileName)
require.NoError(t, err)
err = os.Remove(defaultCredentialFileName)
require.NoError(t, err)
}

func TestIsVolumeCreatedCrossProjects(t *testing.T) {
tests := []struct {
name string
volumeSnapshotter VolumeSnapshotter
volumeHandle string
expectedResult bool
}{
{
name: "Invalid Volume handle",
volumeSnapshotter: VolumeSnapshotter{
log: logrus.New(),
},
volumeHandle: "InvalidHandle",
expectedResult: false,
},
{
name: "Volume is created cross-project",
volumeSnapshotter: VolumeSnapshotter{
log: logrus.New(),
volumeProject: "velero-gcp-2",
},
volumeHandle: "projects/velero-gcp/zones/us-central1-f/disks/pvc-a970184f-6cc1-4769-85ad-61dcaf8bf51d",
expectedResult: true,
},
{
name: "Volume is not created cross-project",
volumeSnapshotter: VolumeSnapshotter{
log: logrus.New(),
volumeProject: "velero-gcp",
},
volumeHandle: "projects/velero-gcp/zones/us-central1-f/disks/pvc-a970184f-6cc1-4769-85ad-61dcaf8bf51d",
expectedResult: false,
},
}

for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
require.Equal(t, test.expectedResult, test.volumeSnapshotter.IsVolumeCreatedCrossProjects(test.volumeHandle))
})
}
}
