Unique job #1084
Comments
I think this issue was answered in #531
💖 Thank you @sasharevzin for that link! I think that's right; the feature to use is Concurrency Controls: https://github.com/bensheldon/good_job#concurrency-controls
And sorry @gagalago for overlooking this when you originally posted 😞
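(For reference, the Concurrency Controls API documented in the README linked above takes a single key that both limits share; a minimal example adapted from those docs, with `MyJob` as a placeholder name:)

```ruby
class MyJob < ApplicationJob
  include GoodJob::ActiveJobExtensions::Concurrency

  good_job_control_concurrency_with(
    # at most two jobs with this key waiting in the queue
    enqueue_limit: 2,
    # at most one job with this key performing at a time
    perform_limit: 1,
    # a single key shared by both limits
    key: -> { "MyJob-#{arguments.first}" }
  )

  def perform(first_argument)
    # do work
  end
end
```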
You are right, but I didn't explain my use case well enough 🙈. Something I want to be able to do is the following:

```ruby
class ExampleJob < ApplicationJob
  include GoodJob::ActiveJobExtensions::Concurrency

  good_job_control_concurrency_with(
    # only one job at a time for the same record; saves me from
    # calling record.lock! in the job
    perform_limit: 1,
    perform_key: -> { "ExampleJob-#{arguments.first}" },
    # only one job with exactly the same arguments enqueued at any time
    enqueue_limit: 1,
    enqueue_key: -> { "ExampleJob-#{arguments.first}-#{arguments.second}" }
  )

  def perform(record, attributes)
    # do something
  end
end
```

Because I need two different keys, one for enqueue and one for perform, I had to implement the before-enqueue deduplication myself. Now that I understand better how good_job works, what do you think about supporting per-limit keys, the same way the limits already come in pairs, as in my example?
Oh! Now I understand the desire for multiple keys. The challenge with multiple keys is that the key requires its own indexed column in the database, because it's used in the query that counts jobs. It sounds like just one more column, but anything that involves a schema migration has a somewhat high bar in my mind. I have been thinking that doing #1095 would make it easy to support multiple concurrency keys, because each key could be stored as a label.
I don't know if I'll find time soon to propose that improvement, especially because I have a workaround 😇. I'll keep you posted if I begin working on it.
I have a lot of duplicate jobs enqueued that take some time to perform. Running them all is a waste of resources, since one of them is enough.
I solved it by adding this callback:
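(The callback itself did not survive in this thread. As a purely hypothetical reconstruction, not the author's actual code, a before_enqueue deduplication might look roughly like this, assuming GoodJob's `GoodJob::Job` Active Record model and its `serialized_params` jsonb column; note it is not race-free without a lock or unique index:)

```ruby
class ExampleJob < ApplicationJob
  # Skip enqueueing when an identical job is already waiting.
  before_enqueue do |job|
    # GoodJob stores each job's ActiveJob payload in the
    # serialized_params jsonb column of the good_jobs table;
    # finished_at is NULL while a job is pending or running.
    duplicate_exists = GoodJob::Job
      .where("serialized_params->>'job_class' = ?", job.class.name)
      .where(finished_at: nil)
      .where("serialized_params->'arguments' = ?::jsonb",
             job.serialize["arguments"].to_json)
      .exists?

    # throw :abort halts the ActiveJob callback chain and
    # cancels the enqueue entirely.
    throw :abort if duplicate_exists
  end

  def perform(record, attributes)
    # do something
  end
end
```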
I was wondering what you think about my solution. Do you think we could generalize it in a way that would let it be integrated into good_job?
PS: some more general alternatives that rely on Redis are https://github.com/mhenrixon/sidekiq-unique-jobs and https://github.com/veeqo/activejob-uniqueness. From my understanding, my implementation is similar to their until_executed strategy.
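(For comparison, this is roughly what the until_executed strategy looks like with activejob-uniqueness, per that gem's README; unlike good_job's Postgres-only approach, it requires Redis:)

```ruby
class ExampleJob < ApplicationJob
  # The lock is taken when the job is enqueued and only released
  # after it has been performed; duplicate enqueues in between
  # are dropped.
  unique :until_executed
end
```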