Throttling on arbitrary resources. #46

Closed
wants to merge 89 commits into from

Changes from 73 commits
89 commits
a3728f0
simple throttle for qless
Mar 10, 2014
44df91c
finished renaming resources to throttle
Mar 10, 2014
14d0fe3
throttle tests wip
Mar 10, 2014
674353a
test fixes
Mar 10, 2014
32c795c
wip
Mar 10, 2014
4de319b
general working
Mar 10, 2014
493d157
throttle api
Mar 10, 2014
f6b9a38
throttle changes
Mar 10, 2014
36fd0fb
Add Throttle locks and pending member functions
Mar 10, 2014
3599db3
wip
Mar 10, 2014
9f3bdb9
wip
Mar 10, 2014
623ebd9
Fix tests
Mar 11, 2014
b21e519
Switch to sorted set
Mar 11, 2014
69bd3e7
Fix throttle locks and pending member functions
Mar 11, 2014
d3fe766
lock fixes
Mar 11, 2014
1e003e5
finished basic tests
Mar 11, 2014
4df4123
switched to rank instead of score
Mar 11, 2014
7fff784
Acquire throttle on pop
Mar 11, 2014
fe776ef
test fixes
Mar 11, 2014
4fc8e3c
test fixes
Mar 11, 2014
27aebd4
Add tests for dependent throttling
Mar 11, 2014
a3244fe
Use throttles to handle max-queue-concurrency
Mar 11, 2014
81e2103
removed commented code
Mar 11, 2014
b4e7765
support multiple resources
Mar 11, 2014
308a77e
implemented multiple throttles per job
Mar 11, 2014
5d86598
documentation
Mar 12, 2014
5141cbf
Add job tests for multiple throttles
Mar 12, 2014
a27ac2e
implemeted queue throttled
Mar 12, 2014
d230b54
Merge branch 'throttle' of github.com:backupify/qless-core into throttle
Mar 12, 2014
b159933
small changes
Mar 12, 2014
6522fe7
Add tests for dynamically changing throttle concurrency level
Mar 12, 2014
eaf6be1
Add tests to verify queue throttled
Mar 12, 2014
32a1408
Minor optimizations and test fixes
Mar 12, 2014
a70362b
Add test for queue throttled count
Mar 12, 2014
fc332c9
Add api methods for setting queue throttle max
Mar 12, 2014
b54298f
throttles refactor/cleanup
Mar 12, 2014
71ac5e0
Update queue.lua
Mar 12, 2014
aadd902
fixed syntax error
Mar 13, 2014
9595d5e
Simplify job#acquire_throttles
Mar 13, 2014
769d4f3
Remove commented code
Mar 13, 2014
f7a3b6f
misc fixes
Mar 13, 2014
730c668
Merge branch 'throttle' of github.com:backupify/qless-core into throttle
Mar 13, 2014
7d85dc8
attempt to throttle jobs immediately
Mar 13, 2014
2d9810c
added optional expiration to a throttle
Mar 13, 2014
4728f1d
removed printlines
Mar 13, 2014
85350ea
Add API to retrieve throttle ttl
Mar 13, 2014
5668c32
Set queue throttle expiration to 0
Mar 13, 2014
0d8290e
Update test_job.py
Mar 13, 2014
9a37225
Merge pull request #1 from backupify/throttle
Mar 13, 2014
70a2779
Catch when job has no throttles
Mar 13, 2014
dd2d931
test exposing cancel bug
wr0ngway Mar 14, 2014
8f004dc
fix for cancelled
Mar 31, 2014
a9c3b98
wip
Apr 1, 2014
d1b52be
Merge pull request #2 from backupify/fix_cancel
Apr 1, 2014
d7372ab
removed printline statements
Apr 2, 2014
f1bf59f
removed debugging code
Apr 2, 2014
3b80d6c
Merge pull request #3 from backupify/throttle
Apr 11, 2014
3108245
fixes throttle release to properly remove a job from the throttled set
May 12, 2014
d058374
small optimization
May 12, 2014
ee6f039
test name change
May 12, 2014
42aa9be
whitespace fix
May 12, 2014
3bffc21
Merge pull request #4 from backupify/cancel-fix
May 12, 2014
d9903a5
fix tags work
Jun 3, 2014
56df3fc
test fix
Jun 3, 2014
bd089c0
removed unnecessary code
Jun 3, 2014
ed4a0f8
removed unnecessary change
Jun 3, 2014
9a766e6
Merge pull request #5 from backupify/fix-tags
Jun 3, 2014
3550b9b
patches remove_tag method to be more reliable
Jun 4, 2014
d821344
Merge pull request #6 from backupify/patch-remove-tags
Jun 4, 2014
66c1ebd
Merge branch 'upstream-master' into merge-upstream
Aug 8, 2014
3fe1fcd
merged master
Aug 8, 2014
6451b7c
Merge remote-tracking branch 'upstream/master' into merge-upstream
Aug 8, 2014
65dac2d
Merge pull request #7 from backupify/merge-upstream
james-lawrence Aug 18, 2014
cda5ed8
Update throttle.lua
james-lawrence Nov 26, 2014
b6fd1aa
test fixes
May 4, 2015
38e2493
Merge branch 'upstream' into merge-upstream
May 4, 2015
5dbc192
Merge pull request #8 from backupify/merge-upstream
james-lawrence May 4, 2015
21097aa
Merge branch 'upstream-master'
May 4, 2015
7932ac9
Merge branch 'master' of github.com:backupify/qless-core
May 4, 2015
8fda126
retry pop up to config limit when pop quantity would be unfulfilled d…
wr0ngway Jul 23, 2015
98b71d6
short circuit retry if nothing in work queue
wr0ngway Jul 23, 2015
4718824
verify contents of waiting jids in tests for max-pop-retry
wr0ngway Jul 31, 2015
8baa512
Merge pull request #9 from backupify/pop_retry
Jul 31, 2015
9e5e716
Update api.lua
james-lawrence Sep 10, 2015
e8ed50c
Update test_throttle.py
james-lawrence Sep 10, 2015
230a777
tests
Sep 10, 2015
e3fdb7e
Merge pull request #10 from backupify/throttle-release-job-api
james-lawrence Sep 10, 2015
6dbf028
Ignore ghost jids that have no real job
Jul 5, 2016
20dc687
Remove old queue throttle from reput job
Jul 8, 2016
6 changes: 3 additions & 3 deletions Makefile
@@ -1,9 +1,9 @@
all: qless.lua qless-lib.lua

qless-lib.lua: base.lua config.lua job.lua queue.lua recurring.lua worker.lua
qless-lib.lua: base.lua config.lua job.lua queue.lua recurring.lua worker.lua throttle.lua
echo "-- Current SHA: `git rev-parse HEAD`" > qless-lib.lua
echo "-- This is a generated file" >> qless-lib.lua
cat base.lua config.lua job.lua queue.lua recurring.lua worker.lua >> qless-lib.lua
cat base.lua config.lua job.lua queue.lua recurring.lua worker.lua throttle.lua >> qless-lib.lua

qless.lua: qless-lib.lua api.lua
# Cat these files out, but remove all the comments from the source
@@ -18,4 +18,4 @@ clean:

.PHONY: test
test: qless.lua *.lua
nosetests --exe -v
nosetests --exe -v $(TEST)
42 changes: 41 additions & 1 deletion api.lua
@@ -135,7 +135,7 @@ QlessAPI.unpause = function(now, ...)
end

QlessAPI.cancel = function(now, ...)
return Qless.cancel(unpack(arg))
return Qless.cancel(now, unpack(arg))
end

QlessAPI.timeout = function(now, ...)
@@ -199,6 +199,46 @@ QlessAPI['queue.forget'] = function(now, ...)
QlessQueue.deregister(unpack(arg))
end

QlessAPI['queue.throttle.get'] = function(now, queue)
local data = Qless.throttle(QlessQueue.ns .. queue):data()
if not data then
return nil
end
return cjson.encode(data)
end

QlessAPI['queue.throttle.set'] = function(now, queue, max)
Qless.throttle(QlessQueue.ns .. queue):set({maximum = max}, 0)
end
Contributor:
Why are there separate queue.throttle.* APIs when there are already throttle.* APIs? As far as I can tell, they aren't used by your qless gem PR...

Anyhow, if we do still need them for some reason, I'd expect them to be implemented in terms of the throttle.* APIs.

Author:
Holdover from an original implementation; unnecessary =)
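
If the wrappers were kept, a rough sketch of what the reviewer suggests (delegating to the generic throttle.* handlers) could look like the following. This is illustrative only, not code from the PR, and it assumes the throttle.* handlers below keep their current signatures:

-- queue throttles are just throttles whose id is the namespaced queue name
QlessAPI['queue.throttle.set'] = function(now, queue, max)
  -- queue throttles never expire, so no expiration argument is passed
  return QlessAPI['throttle.set'](now, QlessQueue.ns .. queue, max)
end

The get wrapper could delegate the same way, provided its guard against missing throttle data is either kept in the wrapper or folded into throttle.get.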


-- Throttle apis
QlessAPI['throttle.set'] = function(now, tid, max, ...)
local expiration = unpack(arg)
Contributor:
Why deal with ... and unpack(arg), rather than declaring expiration as an argument?

Author:
It was to make it an optional argument; optional-argument handling isn't terribly strong in Lua.

local data = {
maximum = max
}
Qless.throttle(tid):set(data, tonumber(expiration or 0))
end
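
For reference, a minimal sketch of the reviewer's alternative, declaring expiration as an ordinary trailing parameter (in Lua, arguments the caller omits are simply nil, so it stays optional without going through ... and unpack(arg)). Illustrative only, not part of this PR:

QlessAPI['throttle.set'] = function(now, tid, max, expiration)
  -- expiration is optional: when omitted it is nil and falls back to 0
  Qless.throttle(tid):set({maximum = max}, tonumber(expiration or 0))
end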

QlessAPI['throttle.get'] = function(now, tid)
return cjson.encode(Qless.throttle(tid):data())
end

QlessAPI['throttle.delete'] = function(now, tid)
return Qless.throttle(tid):unset()
end

QlessAPI['throttle.locks'] = function(now, tid)
return Qless.throttle(tid).locks.members()
end

QlessAPI['throttle.pending'] = function(now, tid)
return Qless.throttle(tid).pending.members()
end

QlessAPI['throttle.ttl'] = function(now, tid)
return Qless.throttle(tid):ttl()
end
-------------------------------------------------------------------------------
-- Function lookup
-------------------------------------------------------------------------------
114 changes: 79 additions & 35 deletions base.lua
@@ -23,6 +23,12 @@ local QlessJob = {
}
QlessJob.__index = QlessJob

-- throttle forward declaration
local QlessThrottle = {
ns = Qless.ns .. 'th:'
}
QlessThrottle.__index = QlessThrottle

-- RecurringJob forward declaration
local QlessRecurringJob = {}
QlessRecurringJob.__index = QlessRecurringJob
@@ -61,19 +67,69 @@ function Qless.recurring(jid)
return job
end

-- Return a throttle object
-- throttle objects are used for arbitrary throttling of jobs.
function Qless.throttle(tid)
assert(tid, 'Throttle(): no tid provided')
local throttle = QlessThrottle.data({id = tid})
setmetatable(throttle, QlessThrottle)

-- set of jids which have acquired a lock on this throttle.
throttle.locks = {
length = function()
return (redis.call('zcard', QlessThrottle.ns .. tid .. '-locks') or 0)
end, members = function()
return redis.call('zrange', QlessThrottle.ns .. tid .. '-locks', 0, -1)
end, add = function(...)
if #arg > 0 then
redis.call('zadd', QlessThrottle.ns .. tid .. '-locks', unpack(arg))
end
end, remove = function(...)
if #arg > 0 then
return redis.call('zrem', QlessThrottle.ns .. tid .. '-locks', unpack(arg))
end
end, pop = function(min, max)
return redis.call('zremrangebyrank', QlessThrottle.ns .. tid .. '-locks', min, max)
end, peek = function(min, max)
return redis.call('zrange', QlessThrottle.ns .. tid .. '-locks', min, max)
end
}

-- set of jids which are waiting for the throttle to become available.
throttle.pending = {
length = function()
return (redis.call('zcard', QlessThrottle.ns .. tid .. '-pending') or 0)
end, members = function()
return redis.call('zrange', QlessThrottle.ns .. tid .. '-pending', 0, -1)
end, add = function(now, jid)
redis.call('zadd', QlessThrottle.ns .. tid .. '-pending', now, jid)
end, remove = function(...)
if #arg > 0 then
return redis.call('zrem', QlessThrottle.ns .. tid .. '-pending', unpack(arg))
end
end, pop = function(min, max)
return redis.call('zremrangebyrank', QlessThrottle.ns .. tid .. '-pending', min, max)
end, peek = function(min, max)
return redis.call('zrange', QlessThrottle.ns .. tid .. '-pending', min, max)
end
}
Contributor:
throttle.locks and throttle.pending seem almost identical, except for two things (that I noticed):

  • -locks vs -pending suffix (this makes sense).
  • add can take variadic args for locks but not for pending (Is this an oversight or intentional?)

Given that there is duplication in their definition, and presumably we'll want to keep them in sync, I think it might be worth extracting some kind of factory method that takes the suffix and constructs the table. Thoughts?

On a side note, I don't really get the use of pop and peek here yet... that makes it seem like it's meant to function similarly to a queue, where a caller uses pop to get an item if one is available, but I'd expect the caller to always want to add or remove known jids. I'm reading this PR top-to-bottom, so maybe the use of these functions will become clearer to me as I keep going down.

Author:
There is also a lot of duplication between these and the queues themselves. A factory for constructing these would be great; I thought about it while I was writing the code but didn't have time to include it.

On pop & peek you are correct (I'm fairly sure; it's been a long time since I've looked at the code). Pretty sure they were used for debugging at some point.


return throttle
end
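
A sketch of the factory discussed in the thread above, building the sorted-set accessor table from a key suffix so locks and pending share one definition. The helper name throttle_set is hypothetical, the variadic add mirrors the -locks variant, and this is illustrative only, not code from the PR:

local function throttle_set(tid, suffix)
  -- all accessors close over the fully qualified sorted-set key
  local key = QlessThrottle.ns .. tid .. suffix
  return {
    length = function()
      return (redis.call('zcard', key) or 0)
    end, members = function()
      return redis.call('zrange', key, 0, -1)
    end, add = function(...)
      if #arg > 0 then
        redis.call('zadd', key, unpack(arg))
      end
    end, remove = function(...)
      if #arg > 0 then
        return redis.call('zrem', key, unpack(arg))
      end
    end, pop = function(min, max)
      return redis.call('zremrangebyrank', key, min, max)
    end, peek = function(min, max)
      return redis.call('zrange', key, min, max)
    end
  }
end

-- inside Qless.throttle the two sets would then be built as:
--   throttle.locks   = throttle_set(tid, '-locks')
--   throttle.pending = throttle_set(tid, '-pending')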

-- Failed([group, [start, [limit]]])
-- ------------------------------------
-- If no group is provided, this returns a JSON blob of the counts of the
-- various groups of failures known. If a group is provided, it will report up
-- to `limit` from `start` of the jobs affected by that issue.
--
--
-- # If no group, then...
-- {
-- 'group1': 1,
-- 'group2': 5,
-- ...
-- }
--
--
-- # If a group is provided, then...
-- {
-- 'total': 20,
@@ -119,9 +175,9 @@ end
-------------------------------------------------------------------------------
-- Return all the job ids currently considered to be in the provided state
-- in a particular queue. The response is a list of job ids:
--
--
-- [
-- jid1,
-- jid1,
-- jid2,
-- ...
-- ]
@@ -146,6 +202,8 @@ function Qless.jobs(now, state, ...)
return queue.locks.peek(now, offset, count)
elseif state == 'stalled' then
return queue.locks.expired(now, offset, count)
elseif state == 'throttled' then
return queue.throttled.peek(now, offset, count)
elseif state == 'scheduled' then
queue:check_scheduled(now, queue.scheduled.length())
return queue.scheduled.peek(now, offset, count)
@@ -167,7 +225,7 @@ end
-- associated with that id, and 'untrack' stops tracking it. In this context,
-- tracking is nothing more than saving the job to a list of jobs that are
-- considered special.
--
--
-- {
-- 'jobs': [
-- {
@@ -252,18 +310,17 @@ function Qless.tag(now, command, ...)
tags = cjson.decode(tags)
local _tags = {}
for i,v in ipairs(tags) do _tags[v] = true end

-- Otherwise, add the job to the sorted set with that tags
for i=2,#arg do
local tag = arg[i]
if _tags[tag] == nil then
_tags[tag] = true
table.insert(tags, tag)
end
redis.call('zadd', 'ql:t:' .. tag, now, jid)
redis.call('zincrby', 'ql:tags', 1, tag)
Qless.job(jid):insert_tag(now, tag)
end

redis.call('hset', QlessJob.ns .. jid, 'tags', cjson.encode(tags))
return tags
else
@@ -278,18 +335,17 @@ function Qless.tag(now, command, ...)
tags = cjson.decode(tags)
local _tags = {}
for i,v in ipairs(tags) do _tags[v] = true end
-- Otherwise, add the job to the sorted set with that tags

-- Otherwise, remove the job from the sorted set with that tags
for i=2,#arg do
local tag = arg[i]
_tags[tag] = nil
redis.call('zrem', 'ql:t:' .. tag, jid)
redis.call('zincrby', 'ql:tags', -1, tag)
Qless.job(jid):remove_tag(tag)
Contributor:
It's nice to see some better abstractions being added, rather than re-implementing low-level details in multiple places, so thanks for adding remove_tag :).

end

local results = {}
for i,tag in ipairs(tags) do if _tags[tag] then table.insert(results, tag) end end

redis.call('hset', QlessJob.ns .. jid, 'tags', cjson.encode(results))
return results
else
@@ -319,7 +375,7 @@ end
-- Cancel a job from taking place. It will be deleted from the system, and any
-- attempts to renew a heartbeat will fail, and any attempts to complete it
-- will fail. If you try to get the data on the object, you will get nothing.
function Qless.cancel(...)
function Qless.cancel(now, ...)
-- Dependents is a mapping of a job to its dependent jids
local dependents = {}
for _, jid in ipairs(arg) do
@@ -365,22 +421,20 @@ function Qless.cancel(...)
-- Remove it from that queue
if queue then
local queue = Qless.queue(queue)
queue.work.remove(jid)
queue.locks.remove(jid)
queue.scheduled.remove(jid)
queue.depends.remove(jid)
queue:remove_job(jid)
end

local job = Qless.job(jid)

job:throttles_release(now)

-- We should probably go through all our dependencies and remove
-- ourselves from the list of dependents
for i, j in ipairs(redis.call(
'smembers', QlessJob.ns .. jid .. '-dependencies')) do
redis.call('srem', QlessJob.ns .. j .. '-dependents', jid)
end

-- Delete any notion of dependencies it has
redis.call('del', QlessJob.ns .. jid .. '-dependencies')

-- If we're in the failed state, remove all of our data
if state == 'failed' then
failure = cjson.decode(failure)
@@ -398,25 +452,15 @@ end
'ql:s:stats:' .. bin .. ':' .. queue, 'failed', failed - 1)
end

-- Remove it as a job that's tagged with this particular tag
local tags = cjson.decode(
redis.call('hget', QlessJob.ns .. jid, 'tags') or '{}')
for i, tag in ipairs(tags) do
redis.call('zrem', 'ql:t:' .. tag, jid)
redis.call('zincrby', 'ql:tags', -1, tag)
end
job:delete()
Contributor:
👍 Again, it's good to see this extracted into a helper function.


-- If the job was being tracked, we should notify
if redis.call('zscore', 'ql:tracked', jid) ~= false then
Qless.publish('canceled', jid)
end

-- Just go ahead and delete our data
redis.call('del', QlessJob.ns .. jid)
redis.call('del', QlessJob.ns .. jid .. '-history')
end
end

return arg
end
