Hi all 👋
I have a fairly specific use case for good_job and would like to ask if anyone has an idea for handling this. Perhaps there's something obvious that I am missing.
We have a job type that is fairly memory-hungry, so we limit the number of jobs that can be performed concurrently to 2, using the concurrency `perform_limit`. We also need to make sure that duplicate jobs cannot be run for this job type, based on an `account_id` that is passed as an argument to the job.
Is there a clean way to limit number of jobs running at once for a particular job class, whilst also ensuring that the same job cannot be run simultaneously for the same account?
Right now I am using a manual SQL check on the good_jobs table with `before_enqueue`, which is not ideal at all.
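For anyone landing here: one possible approach is to use GoodJob's concurrency extension key for the per-account uniqueness only, and enforce the class-wide limit of 2 by giving the job its own queue with a capped thread count. The sketch below assumes that split; the class name `HeavyAccountJob`, the queue name `memory_hungry`, and the keyword argument `account_id` are illustrative, not from the thread.

```ruby
# Hypothetical job showing per-account deduplication via a concurrency key.
# Requires the good_job gem inside a Rails app.
class HeavyAccountJob < ApplicationJob
  include GoodJob::ActiveJobExtensions::Concurrency

  # Dedicated queue; the class-wide "max 2 at once" is enforced by the
  # queue's thread pool rather than by the concurrency key.
  queue_as :memory_hungry

  # At most one job (enqueued or performing) per account at a time.
  good_job_control_concurrency_with(
    total_limit: 1,
    key: -> { "#{self.class.name}-#{arguments.first[:account_id]}" }
  )

  def perform(account_id:)
    # ... memory-hungry work for the account ...
  end
end
```

Then cap parallelism for that queue in the GoodJob configuration, e.g. `config.good_job.queues = "memory_hungry:2;*"`, so at most 2 of these jobs run simultaneously regardless of account. This avoids the manual SQL check, though it does mean the "2 at once" limit is per-process rather than global across multiple worker processes — if you run several workers, you may still need a `perform_limit` or a coordination mechanism on top.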