Replies: 2 comments
-
I am trying to understand your issue, so let me ask some questions:
In BullMQ, jobs are distributed to workers in round-robin fashion, so I am not sure what you mean by the distribution not being even. You can, of course, give every worker a different concurrency factor and thereby determine how many jobs each machine processes in parallel.
Why is a worker getting a new job when the code hits `await`?
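For reference, the per-worker limit mentioned above is the `concurrency` option of a BullMQ worker. A minimal sketch, assuming BullMQ is installed and a Redis instance is reachable; the queue name, handler body, and connection details are placeholders:

```javascript
const { Worker } = require('bullmq');

// Process at most 5 jobs in parallel on this worker, even though the
// handler yields at every `await`.
const worker = new Worker(
  'my-queue',                      // placeholder queue name
  async (job) => {
    // await API / DB calls here; up to 5 jobs interleave on this worker
  },
  {
    concurrency: 5,                                // per-worker parallel job limit
    connection: { host: 'localhost', port: 6379 }, // assumed local Redis
  }
);
```

Setting a lower `concurrency` on a smaller machine (or the same value everywhere) caps how many jobs each worker holds at once, which is the closest analogue to a consumer prefetch count.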
-
@gramcha Did your issue get resolved? I now need to migrate all jobs to Node.js-based workers.
-
Hi Team,
We are running Bull queue workers on multiple machines with auto-scaling enabled in the cloud (AWS ECS).
Say I have 5 workers running in Docker containers on 5 machines, with load distributed like 100%, 20%, 20%, 20%, 20% CPU usage.
Job distribution across machines is not even: many jobs end up being served by a single machine, which drives that machine to 100% CPU and increases job execution delays regardless of how many machines we run.
Our jobs make a lot of `async` calls to API services and the DB, and whenever the code hits an `await`, that worker picks up a new job. We are looking for a way to set a limit on the job count per worker so the load is distributed evenly across workers.
In RabbitMQ, we can set a prefetch count on the consumer side to limit the number of messages served by a consumer at a time. Is there similar functionality in Bull, or any other workaround that could solve this issue?
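To see why a per-worker cap evens out load between machines, here is a self-contained sketch in plain Node.js of the mechanism Bull exposes as `queue.process(concurrency, handler)` and BullMQ as the worker `concurrency` option. The `Semaphore` class is purely illustrative, not part of Bull's API:

```javascript
// Illustrative semaphore: caps how many async jobs run at once,
// the way a per-worker concurrency limit (or a prefetch count) does.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.waiting = [];
  }
  async acquire() {
    if (this.active < this.max) { this.active++; return; }
    await new Promise((resolve) => this.waiting.push(resolve));
    this.active++;
  }
  release() {
    this.active--;
    const next = this.waiting.shift();
    if (next) next();
  }
}

// Run all jobs, but never more than `concurrency` at a time;
// returns the peak number of jobs that were in flight together.
async function runJobs(jobs, concurrency) {
  const sem = new Semaphore(concurrency);
  let peak = 0, current = 0;
  await Promise.all(jobs.map(async (job) => {
    await sem.acquire();
    current++;
    peak = Math.max(peak, current);
    try { await job(); } finally { current--; sem.release(); }
  }));
  return peak;
}

// Usage: 10 simulated I/O-bound jobs (each just awaits a timer),
// with at most 3 in flight on this "worker".
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));
runJobs(Array.from({ length: 10 }, () => () => sleep(10)), 3)
  .then((peak) => console.log(peak)); // peak never exceeds 3
```

Even though every job yields at `await`, at most `concurrency` jobs are ever in flight on one worker; the broker (Redis, in Bull's case) keeps the remaining jobs available for other, less loaded workers.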