Introduces a job that runs the AMPF sync on a per-storage basis #15979
Why?
We run a long job that syncs every `Storages::Storage` with automatically managed folders enabled against its provider. This job is triggered basically all the time, on top of running on a schedule. Project created, run it. Member added somewhere, run it. Sneezed in a funny manner, run it.
This leads to a lot of wasted resources, as we are doing work that probably isn't required.
What?
This PR aims to reduce processing time and wasted resources by splitting the work on a per-storage basis.
This would still avoid some race conditions while giving us slightly more flexibility when dealing with the synchronization process.
How?
First, all the logic from `Storages::ManageIntegrationJob` was moved to `Storages::AutomaticallyManagedStorageSyncJob`.
This new job works as mentioned above: it runs against a single `Storages::Storage` and has concurrency controls that allow at most 2 jobs per storage (1 running / 1 queued).
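Roughly, such a per-storage job could look like the sketch below. This is a minimal sketch assuming GoodJob's ActiveJob concurrency extension is available; the limit values, the key format and the `SyncService` call are illustrative stand-ins, not the exact code in this PR.

```ruby
# Minimal sketch of a per-storage sync job with concurrency controls.
# Assumes GoodJob's ActiveJob concurrency extension; names flagged as
# hypothetical are illustrative and not taken from the PR.
module Storages
  class AutomaticallyManagedStorageSyncJob < ApplicationJob
    include GoodJob::ActiveJobExtensions::Concurrency

    queue_as :default

    # Allow at most one running and one queued job per storage.
    good_job_control_concurrency_with(
      enqueue_limit: 1,
      perform_limit: 1,
      key: -> { "#{self.class.name}-#{arguments.first.id}" }
    )

    def perform(storage)
      # Sync this single storage's automatically managed folders with its
      # provider. The actual logic was moved over from
      # Storages::ManageIntegrationJob; SyncService is a hypothetical stand-in.
      SyncService.new(storage).call
    end
  end
end
```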
This new job was then hooked into the `Member`, `Project` and `ProjectStorage` events, so that it enqueues a job only for the relevant `Storages::Storage`, if any.
The original job then only needs to queue this new one for each storage to keep its original functionality while benefiting from the added granularity.
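A sketch of that fan-out; the `automatically_managed` scope is an assumed name for "storages with automatically managed folders enabled", not necessarily the real scope:

```ruby
# Minimal sketch of the original job delegating to the per-storage job.
module Storages
  class ManageIntegrationJob < ApplicationJob
    def perform
      # `automatically_managed` is an assumed scope name.
      Storage.automatically_managed.find_each do |storage|
        # The per-storage job's concurrency controls keep duplicate work
        # for the same storage from piling up.
        AutomaticallyManagedStorageSyncJob.perform_later(storage)
      end
    end
  end
end
```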