
Throttling on arbitrary resources. #46

Closed
wants to merge 89 commits

Conversation

james-lawrence

Implements throttling for arbitrary resources.
#27

Currently running in our production system with 14k worker processes spread across 15 Redis instances.

Created a throttle object model in qless-core, which consists of an active job set, a pending job set, and a maximum value. The throttle controls how many jobs can run simultaneously.
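As a rough illustration of the model described above, here is a minimal in-memory sketch (the `Throttle` class and its field names are hypothetical; the actual qless-core implementation stores the active and pending job sets in Redis):

```python
class Throttle:
    """Illustrative model of a qless throttle: caps concurrent jobs."""

    def __init__(self, maximum):
        self.maximum = maximum  # max number of jobs allowed to run at once
        self.active = set()     # jids currently holding a slot
        self.pending = set()    # jids parked waiting for a slot

    def available(self):
        # A slot is free when fewer than `maximum` jobs are active
        return len(self.active) < self.maximum
```

For example, a `Throttle(2)` reports `available()` as true until two jids occupy its active set.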

Modified the put command to attempt to throttle new jobs up front. This avoids wasting pops on jobs that would immediately move into the throttled state.

Modified the pop command to check whether all of the job's throttles are available; if not, the job is added to the pending queue of one of those throttles.
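The pop-side check is all-or-nothing: a job either acquires every one of its throttles or is parked on one of them. A hedged sketch of that logic (`try_acquire` and the dict layout are illustrative, not the actual Lua implementation):

```python
def try_acquire(jid, throttles):
    """Acquire all throttles for a job, or park it on the first full one.

    throttles: list of dicts with 'maximum' (int), 'active' (set),
    and 'pending' (set) -- an assumed stand-in for the Redis structures.
    """
    for t in throttles:
        if len(t['active']) >= t['maximum']:
            # Not every throttle is free: park the job on this one's
            # pending set instead of running it
            t['pending'].add(jid)
            return False
    # All throttles have a free slot: acquire each of them
    for t in throttles:
        t['active'].add(jid)
    return True
```

With two throttles of `maximum` 1, a first job acquires both, and a second job ends up on a pending set rather than running.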

Modified the complete, fail, and retry commands to release all throttles associated with the job.

When a throttle is released, it checks its pending job set and inserts up to (maximum - active) jobs into the waiting queue to be popped.
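The release step above can be sketched as follows (again a simplified in-memory model; `release`, the dict layout, and `waiting_queue` are assumptions standing in for the Redis sets and queue). Promoted jobs go back to the waiting queue rather than straight into the active set, since they must re-acquire all their throttles on the next pop:

```python
def release(jid, throttle, waiting_queue):
    """Release a finished job's slot and promote pending jobs."""
    throttle['active'].discard(jid)
    # Promote up to (maximum - active) pending jobs into the waiting
    # queue; they will attempt to acquire their throttles when popped
    free = throttle['maximum'] - len(throttle['active'])
    for _ in range(min(free, len(throttle['pending']))):
        waiting_queue.append(throttle['pending'].pop())
```

For a throttle with `maximum` 2, two active jobs, and one pending job, releasing one active job frees exactly one slot and promotes the pending job.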

Let me know if you have any questions, changes that need to be made, etc.


-- Throttle apis
QlessAPI['throttle.set'] = function(now, tid, max, ...)
  local expiration = unpack(arg)
Contributor

Why deal with ... and unpack(arg), rather than declaring expiration as an argument?

Author

It was to make it an optional argument; I'm not terribly strong in Lua.
