[Draft] Improve CSI Snapshotting Performance #6860

Draft · wants to merge 7 commits into main
64 changes: 64 additions & 0 deletions design/improve-csi-snaphot-perf.md
@@ -0,0 +1,64 @@
# Proposal to improve performance of CSI snapshotting through Velero.

- [Proposal to improve performance of CSI snapshotting through Velero](#proposal-to-improve-performance-of-csi-snapshotting-through-velero)

## Abstract
Currently Velero uses the CSI plugin for taking CSI snapshots. The CSI plugin is modeled as a BackupItemAction (BIA): whenever the Velero backup code encounters a PVC, it invokes the CSI plugin's PVCAction BIA. In the plugin's Execute() phase, the CSI plugin waits up to a default of 10 minutes for snapshotting to complete. This is a blocking call, and Velero waits for it to finish before proceeding to the next resource. In case of failures (for example, missing permissions), Velero can end up waiting 10×N minutes. This tracking cannot simply be made asynchronous, because we need to ensure the appearance of the snapshotHandle on the VolumeSnapshotContent before proceeding: pre-hooks run on pods first, then PVCs are snapshotted, and then post-hooks run, so waiting for the actual snapshot is key to ensuring that post-hooks are not executed before snapshotting is complete.
Furthermore, the core Velero code then waits up to 10 minutes for the CSI snapshot to reach readyToUse=true.
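
For illustration, here is a minimal sketch (not the CSI plugin's actual code) of the blocking wait described above; `getSnapshotHandle` is a hypothetical stand-in for reading `status.snapshotHandle` from the bound VolumeSnapshotContent, and the polling interval is an assumption.

```go
import (
	"context"
	"fmt"
	"time"
)

// waitForSnapshotHandle polls until the snapshot handle appears or the
// timeout expires. The 10-minute default mirrors the wait described in the
// abstract; the 5-second polling interval is an assumption for this sketch.
func waitForSnapshotHandle(ctx context.Context, getSnapshotHandle func() (string, error)) error {
	const (
		interval = 5 * time.Second
		timeout  = 10 * time.Minute
	)
	deadline := time.Now().Add(timeout)
	for {
		handle, err := getSnapshotHandle()
		if err == nil && handle != "" {
			return nil // handle present: safe to continue (and later run post-hooks)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for snapshotHandle: %v", timeout, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}
```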

<!-- ## Background -->

## Goals
- Reduce the time to take CSI snapshots.
**@Lyndon-Li** (Contributor), Oct 19, 2023:

I agree there are cases that make the call wait unexpectedly, but I also think they are not entirely Velero's fault; they are mostly down to the specific CSI drivers. Otherwise, the drivers would detect these cases and fail earlier, and Velero would then fail earlier as well.
Therefore, for this single goal, I don't think a dramatic workflow change in Velero is a good bargain.

**Collaborator Author:**

We can special-case the code to only apply to CSI snapshots, and we will make it configurable with a default parallelism of 1, so this won't lead to any drastic workflow change.
If users have more performant CSI drivers, they can increase the parallel count.

In yesterday's community meeting we even discussed that, depending on how this plays out in the real world, we can apply the same parallel approach to other parts of the backup/restore flow to improve Velero performance.

- Preserve the current behaviour of pre- and post-hooks in the context of PVCs attached to a pod.
**Contributor:**

Pre/post hooks exist to achieve better data consistency. But ultimately the consistency needed is not per pod but per application.
Therefore, the current approach of running hooks per pod is not the ultimate solution.

**Collaborator Author:**

Currently the Velero code works that way, at a pod level.
I don't see any tracking issue to introduce application-level consistency. If that's not even on the roadmap, I suggest decoupling it from this performance enhancement.


## Non Goals

## Considerations
- Should a list of PVCs be passed to the CSI plugin for Approach 1?

## How to Group Snapshots
- PVCs which are being used by the same pod/deployment should be snapshotted together (see the grouping sketch after this list).
**Contributor:**

Practically, any set of PVCs could be correlated and need to be grouped, not only PVCs from the same pod/deployment.
More accurately, we could say PVCs from one application may need to be snapshotted together, because a group snapshot is really about achieving data consistency across the application, similar to pre/post hooks. However, there is no real entity for an application in Kubernetes, so in order to support group snapshots, more details need to be considered.

Moreover, we also need to consider how to support data movement of grouped snapshots.

So I suggest we either work through the details and create a thorough solution for group snapshots, or drop the topic from this PR and leave it to a separate PR in the future.

**Contributor:**

I'm not personally opposed to waiting on the grouping of PVCs, but I did want to ask one question.

If the snapshot is taken in a consistent way, do we need to change the data movement? I assume the movement could happen at slightly different times because the data itself should already be consistent, and moving the bytes shouldn't impact consistency, or am I missing something?

**@Lyndon-Li** (Contributor), Oct 19, 2023:

The data movement doesn't impact the consistency.
On the other hand, group snapshots do impact the data mover in how it exposes and manipulates snapshots. For example, the snapshot-associated objects are different (VolumeGroupSnapshot and VolumeGroupSnapshotContent), so without changes the data movement cannot support them.

**Contributor:**

Oh I agree, once those APIs go GA, or rather before they go GA or beta, we should support them.

I guess I am confused about how this impacts the current idea.

The way I see this design, and I could be way off, is that when we have no other information about the system besides that a pod is using multiple PVCs and is using CSI, we should trigger the CSI backups in a way that leads to consistent data as well as speeding up the backup.

Is it not a good goal, given the above (ignoring all the work around groups and such), to do this incremental change that helps in multiple ways? @anshulahuja98, is that understanding what you are going for?

**Collaborator Author:**

Yes, correct @shawn-hurley.
We can decouple VolumeGroupSnapshot from this design since it's not even beta yet.

**Contributor:**

See these comments: to me, the current change won't help improve consistency and performance in the NORMAL case. At present, with the help of BIAv2, there are two phases for CSI snapshot creation:

  1. Create the snapshot and wait for the snapshot handle to appear. This phase runs sequentially across PVCs. If we dig into the process of snapshot creation on the storage side, it should be very fast in the NORMAL case.
  2. Wait for the snapshot to reach ReadyToUse. Here data is being moved on the storage side, so it takes time. This phase runs in parallel.

Now, let's look at the differences, with or without the current changes, for consistency and performance in the NORMAL case:

  • For consistency: the idea is to snapshot the PVCs of one pod as close together as possible. Because phase 1 is very fast and phase 2 is asynchronous, PVCs from one pod are already snapshotted very close together; between them there are only simple resource backups. If we simply make the backup workflow async, as the current changes do, I don't know how much more difference it makes. Moreover, don't forget the capabilities of the CSI driver: both the CSI driver and the storage may limit how many snapshots can run together.
  • For performance: since phase 2, the most time-consuming part, is already async, I don't know how much improvement there is in making the main backup workflow async as well.

You may argue that the claim that phase 1 is very fast isn't true for all storages. I agree, but I would rather regard that as a flaw of the storage itself, because technically it can be very fast.

Anyway, we at least need some tests to prove how much improvement there is, for consistency and performance, in various environments; then we can come back and consider the current changes.
Forgive my caution on these changes, but they make the primary workflow very different, and many unexpected problems could arise. I have listed some, but I cannot tell them all. So if we don't know the benefits for sure, I don't think it is a good bargain.

**Contributor:**

My understanding is that aws-ebs, as well as Azure disks, are among the systems that take a long time to snapshot. I may be recalling this wrong.

Can someone verify my recollection? I can't find any documentation.

**Contributor:**

The speed of snapshot-taking is not only related to the storage provider.
It's been reported that snapshot handle creation for large GCP persistent disks takes more than 10 minutes, but IMO those are not common cases for most storage providers.

- PVCs which are not being used by any pod can be snapshotted at any time.
- If there are no hooks provided, snapshotting for all PVCs can be done in parallel.
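
A minimal sketch of the grouping rules above, under the assumption that we only look at pods (deployments resolve to their pods). PVCs not mounted by any pod land in a catch-all group that can be snapshotted at any time. The helper name and shape are illustrative, not Velero's actual code.

```go
import corev1 "k8s.io/api/core/v1"

// groupPVCsByPod buckets PVC names by the pod that mounts them. PVCs not
// referenced by any pod go into the "" bucket and can be snapshotted at any
// time; PVCs sharing a pod form one group to be snapshotted together.
func groupPVCsByPod(pods []corev1.Pod, allPVCs []string) map[string][]string {
	claimedBy := map[string]string{} // PVC name -> pod name
	for _, pod := range pods {
		for _, vol := range pod.Spec.Volumes {
			if vol.PersistentVolumeClaim != nil {
				claimedBy[vol.PersistentVolumeClaim.ClaimName] = pod.Name
			}
		}
	}
	groups := map[string][]string{}
	for _, pvc := range allPVCs {
		groups[claimedBy[pvc]] = append(groups[claimedBy[pvc]], pvc)
	}
	return groups
}
```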

## Approaches

### Approach 1: Add support for VolumeGroupSnapshot in Velero.
- [Volume Group Snapshots](https://kubernetes.io/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/) was introduced as an alpha feature in Kubernetes v1.27. This feature introduces a Kubernetes API that allows users to take crash-consistent snapshots of multiple volumes together. It uses a **label selector to group multiple PersistentVolumeClaims** for snapshotting.
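
A sketch of how Velero might create such a CR with the dynamic client. The v1alpha1 group and field names follow the linked blog post, the `velero.io/group-snapshot` label is a hypothetical label Velero would first apply to the pod's PVCs, and the snapshot class name is also an assumption.

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// createGroupSnapshot creates a VolumeGroupSnapshot that selects all PVCs
// carrying the (hypothetical) velero.io/group-snapshot=<podName> label.
func createGroupSnapshot(ctx context.Context, dyn dynamic.Interface, ns, podName string) error {
	gvr := schema.GroupVersionResource{
		Group:    "groupsnapshot.storage.k8s.io",
		Version:  "v1alpha1",
		Resource: "volumegroupsnapshots",
	}
	vgs := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "groupsnapshot.storage.k8s.io/v1alpha1",
		"kind":       "VolumeGroupSnapshot",
		"metadata": map[string]interface{}{
			"name":      "velero-" + podName,
			"namespace": ns,
		},
		"spec": map[string]interface{}{
			"volumeGroupSnapshotClassName": "csi-group-snapclass", // assumed class name
			"source": map[string]interface{}{
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"velero.io/group-snapshot": podName,
					},
				},
			},
		},
	}}
	_, err := dyn.Resource(gvr).Namespace(ns).Create(ctx, vgs, metav1.CreateOptions{})
	return err
}
```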

### Approach 2: Invoke CSI Plugin in parallel for a group of PVCs.
**Collaborator Author:**

The current code sample is based on this approach.
It builds on the principles of Approach 3, and Approach 1 can be implemented later on top of the current setup.

**Contributor:**

Some details I can come up with, and I think there may be more:

  1. There may be combinations of different backup types. For example, some volumes may be backed up by fs-backup and others by CSI snapshots, and we don't even know which type a volume goes to before the workflow for that backup type is actually launched (considering the opt-in/out options for fs-backup).
  2. Pods are not the only resources that result in PVC backups; other resources may too, for example, VirtualMachine resources for KubeVirt or other Kubernetes-managed VM solutions. So I am afraid a PodAction plugin is not enough to get all PVCs.

**Collaborator Author:**

The aim is not to cover all PVCs; the scope of the current PR is to optimize for CSI snapshots.
We can look into other scenarios as performance requirements come up.

- Invoke the current CSI plugin's Execute() in parallel for a group of PVCs (see the bounded-parallelism sketch after this list).
**Contributor:**

I think in practice the idea is to group CSI PVCs in some way (for example, group the PVCs per pod or namespace, with a maximum of n entries per group), and then back up the PVCs in each group in parallel, regardless of whether it goes via the CSI plugin or not.

At the code level, the backupper may evolve to support item_groups, and it would call backupItem in parallel for the entries in one group. Will that work? This concept of item_groups may help us improve the parallelism within one backup in the future.

**Collaborator Author:**

If you check the code changes in the current PR, they effectively solve for item_groups by invoking backupItem in parallel for additionalItems.

In the future, this can be extended to other resources in a similar way, where we write a BIA to group items and then back them up in parallel.

- What are the performance implications of the parallel calls?
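
To keep the performance impact controllable, the parallel calls could be bounded by a configurable limit, with a default of 1 preserving today's sequential behaviour. A sketch of that pattern, where `backupOne` is a hypothetical stand-in for invoking the CSI plugin's Execute() for one PVC:

```go
import (
	"fmt"
	"sync"
)

// backupGroupInParallel runs backupOne for every PVC in a group with at most
// `parallelism` goroutines in flight; errors are collected rather than
// stopping at the first failure. Illustrative sketch only.
func backupGroupInParallel(pvcs []string, parallelism int, backupOne func(pvc string) error) []error {
	if parallelism < 1 {
		parallelism = 1 // default keeps the current sequential behaviour
	}
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	sem := make(chan struct{}, parallelism) // counting semaphore
	for _, pvc := range pvcs {
		wg.Add(1)
		sem <- struct{}{}
		go func(pvc string) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := backupOne(pvc); err != nil {
				mu.Lock()
				errs = append(errs, fmt.Errorf("snapshot of PVC %s failed: %w", pvc, err))
				mu.Unlock()
			}
		}(pvc)
	}
	wg.Wait()
	return errs
}
```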

### Approach 3: Create a Pod BIA Plugin which will invoke CSI Plugin in parallel for a group of PVCs.
**Contributor:**

For the same consideration of data consistency across the application, we cannot assume that pre/post hooks stay on a per-pod basis. Since we are changing the workflow dramatically, we need to target the ultimate solution instead of building on the current situation.

**Collaborator Author:**

I don't think we even have a tracking issue for application-level pre/post hooks.
If that's not even on the near-term roadmap, I suggest we assume the current flow and optimize it.

If an application-consistency option comes in the future, we can try to accommodate it in a similar way. I am not sure how this blocks the current PR in any way.

- Create a Pod BIA Plugin which will invoke CSI Plugin in parallel for a group of PVCs.
- This would lead to code and logic duplication across CSI Plugin and the pod plugin.
**Collaborator:**

We could mitigate the duplication by pulling shared functionality into functions called by both plugins.


## Key challenge to solve in all the above approaches
- How to group PVCs in the core Velero flow and send them on for processing.

## Detailed Design
-
## Alternatives Considered

## Security Considerations
No security impact.

## Compatibility

## Implementation
### For Approach 2
- Current code flow: `backupItem` in backup.go is invoked for each resource -> this further invokes `itembackupper.backupItem` -> `backupItemInternal`.
**Collaborator Author:**

Before looking at the code, first read through the implementation description.

- Now for a pod -> first pre-hooks are run -> then `executeActions` iterates over all BIAs applicable to the pod -> which invokes the `PodAction`.
- After all actions are run, `executeActions` gets the additionalItems to back up (the PVCs).
- For all these PVCs and other additional items we iterate and call `itembackupper.backupItem`.
- After all additional items are backed up -> control returns to `backupItemInternal` -> post-hooks are run -> and then `backupItem` returns.
- The change here is that when additionalItems are backed up, for PVCs we will run `itembackupper.backupItem` asynchronously (a sketch of the error handling follows below).

Overall this batches all PVCs of one pod together. It can even be extended to the VolumeGroupSnapshot approach in the future.
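
The review thread on the code below discusses how to avoid swallowing errors when additional items are backed up concurrently. One possible pattern, sketched here with hypothetical helper and parameter names, is to log every failure and return a single combined error naming the parent item:

```go
import (
	"fmt"
	"strings"

	"github.com/sirupsen/logrus"
)

// drainAdditionalItemErrors logs each additional-item failure as it is read
// from the channel (which must be closed once all goroutines finish) and
// returns one combined error for the parent item, or nil if nothing failed.
func drainAdditionalItemErrors(log logrus.FieldLogger, currentItem string, errChannel <-chan error) error {
	var msgs []string
	for err := range errChannel {
		log.WithError(err).Errorf("additional item backup failed for %s", currentItem)
		msgs = append(msgs, err.Error())
	}
	if len(msgs) == 0 {
		return nil
	}
	return fmt.Errorf("one or more additional items for %s failed: %s", currentItem, strings.Join(msgs, "; "))
}
```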

## Future enhancement

## Open Issues
NA
50 changes: 43 additions & 7 deletions pkg/backup/item_backupper.go
@@ -22,6 +22,7 @@
"encoding/json"
"fmt"
"strings"
"sync"
"time"

"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -408,7 +409,22 @@
itemOperList := ib.backupRequest.GetItemOperationsList()
*itemOperList = append(*itemOperList, &newOperation)
}

// ## Approach 1
// Extract all PVCs from the Additional Items
// Create a label such as <podname>:<>
// Apply label to all these PVCs
// Create VolumeSnapshotGroup CR with label selector
// Invoke VolumeSnapshotGroup Action with the CR
// Poll for VSG in CSI VSG Plugin's Execute()
// Return Additional Items and continue backup

// ## Approach 2
// Async call the current CSI Plugin's Execute()
// Wait for all snapshots to complete.
// Max parallelism can be controlled by further tweaking the WaitGroup.
additionalItemFilesChannel := make(chan FileForArchive)
errChannel := make(chan error)
var wg sync.WaitGroup
for _, additionalItem := range additionalItemIdentifiers {
gvr, resource, err := ib.discoveryHelper.ResourceFor(additionalItem.GroupResource.WithVersion(""))
if err != nil {
@@ -433,12 +449,32 @@
if err != nil {
return nil, itemFiles, errors.WithStack(err)
}

_, additionalItemFiles, err := ib.backupItem(log, item, gvr.GroupResource(), gvr, mustInclude, finalize)
if err != nil {
return nil, itemFiles, err
}
itemFiles = append(itemFiles, additionalItemFiles...)
wg.Add(1)

go func(additionalItem velero.ResourceIdentifier, log logrus.FieldLogger, item runtime.Unstructured, gvr schema.GroupVersionResource, mustInclude, finalize bool) {
defer wg.Done()
log.WithFields(logrus.Fields{
"groupResource": additionalItem.GroupResource,
"namespace": additionalItem.Namespace,
"name": additionalItem.Name,
}).Infof("Triggering async backupitem for additional item")
_, additionalItemFiles, err := ib.backupItem(log, item, gvr.GroupResource(), gvr, mustInclude, finalize)
if err != nil {
errChannel <- err
return
}
for _, file := range additionalItemFiles {
additionalItemFilesChannel <- file
}
}(additionalItem, log, item.DeepCopy(), gvr, mustInclude, finalize)
}
wg.Wait()
close(additionalItemFilesChannel)
**Collaborator Author:**

I'll refine the channel handling / code accuracy later; for now, consider this only as representing the draft idea for the approach.

close(errChannel)
for itemFilesFromChannel := range additionalItemFilesChannel {
itemFiles = append(itemFiles, itemFilesFromChannel)
}
for err := range errChannel {
**Contributor:**

I think this is going to eat some errors

**Collaborator:**

@shawn-hurley ahh yes, we were talking about this on Slack earlier. Before this change, Velero backs up each additionalItem in turn; upon the first error, it is returned and the others aren't attempted. Now that we're doing them in parallel, all will start, so it's possible that more than one will error out.

Since each failing additionalItem should log an error for its own failure, the full error list shouldn't have anything missing. That said, rather than just returning the error for the additional item, we probably want a more descriptive error here anyway, since err here doesn't reference the current item at all. Perhaps logic along these lines: if errChannel isn't empty, return an error with the message "One or more additional items for $currentItem failed: (string join of individual err messages from errChannel)".

**Collaborator Author:**

"Perhaps logic along the lines of this? If errChannel isn't empty return an error with message "One or more additional items for $currentItem failed: (string join of individual err messages from errChannel)."
I can take care of that.
If that's enough to address this concern

**Collaborator:**

@anshulahuja98 That addresses the concern on my end. I think that should make sure that no errors are swallowed here. Net effect is if a pod has 2 PVCs and both PVC backups fail, then each PVC's backup error should show up as a PVC error, and then the pod will fail with the combined error message.

**Contributor:**

I don't know if it's reasonable to put them all in one message; with, say, 5-10 of them, the message will become unwieldy in the logs.

Can we create a single error, like you said, and then log every other error?

Or is that what was proposed and I missed it?

**Contributor:**

A goroutine could be pulling things off the error channel as the backups run too, so you can see the logs as they fail.

Then the returned error just reflects whether the error-handling goroutine was used.

Thoughts?

**Collaborator Author:**

That works too. Makes sense to me.

return nil, itemFiles, err
}
}
return obj, itemFiles, nil