Sometimes the worker gets stuck while executing email tasks (probably connection problems). To work around this, there is a Celery option, TASK_TIME_LIMIT, which limits task execution time (the default is no limit). However, when a task hits the time limit, the worker is killed with SIGKILL, and from what I can see in the logs, the task is not retried.
[2020-05-12 11:04:51,352: INFO/MainProcess] Received task: djcelery_email_send_multiple[44dfa3fd-f801-4d8d-b15f-eec05c1e04b4]
[2020-05-12 11:05:51,371: ERROR/MainProcess] Task handler raised error: TimeLimitExceeded(60,)
Traceback (most recent call last):
  File "/opt/venvs/www/lib/python3.5/site-packages/billiard/pool.py", line 658, in on_hard_timeout
    raise TimeLimitExceeded(job._timeout)
billiard.exceptions.TimeLimitExceeded: TimeLimitExceeded(60,)
[2020-05-12 11:05:51,373: ERROR/MainProcess] Hard time limit (60s) exceeded for djcelery_email_send_multiple[44dfa3fd-f801-4d8d-b15f-eec05c1e04b4]
[2020-05-12 11:05:51,522: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:17908 exited with 'signal 9 (SIGKILL)'
[2020-05-12 11:09:45,841: INFO/MainProcess] Received task: djcelery_email_send_multiple[399a06d3-629f-42df-a926-9afb76afc98e]
[2020-05-12 11:09:47,142: INFO/ForkPoolWorker-2] Task djcelery_email_send_multiple[399a06d3-629f-42df-a926-9afb76afc98e] succeeded in 1.2928511229692958s: 1
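For context, this is roughly the setting involved. A minimal sketch, assuming the common Django integration where Celery reads its configuration from settings with a `CELERY_` namespace; the 60-second value just mirrors the logs above:

```python
# settings.py -- hypothetical sketch, assuming
# app.config_from_object('django.conf:settings', namespace='CELERY').
# The hard limit kills the pool process with SIGKILL once exceeded,
# so the task is lost rather than retried.
CELERY_TASK_TIME_LIMIT = 60  # maps to Celery's task_time_limit; the default is no limit
```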
I am considering using https://docs.celeryproject.org/en/latest/userguide/configuration.html#task-reject-on-worker-lost, but as the docs state, it is probably not the best way to handle this error. I guess the easiest way would be to handle such an error in django-celery-email internally. Or maybe I am missing something?

Ok, nice. The above works like a charm. By the way, my error with Celery was "Cannot reach CELERY_EMAIL_BACKEND django". I guess it would be wise to mention in django-celery-email's docs that it is recommended to set TASK_SOFT_TIME_LIMIT.
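For anyone who lands here, a minimal sketch of the soft-limit approach (the task name, body, and values below are illustrative, not django-celery-email's actual code). The soft limit raises SoftTimeLimitExceeded inside the task, which gives it a chance to retry instead of being SIGKILLed by the hard limit:

```python
# tasks.py -- illustrative sketch only, not django-celery-email's real task
from celery import shared_task
from celery.exceptions import SoftTimeLimitExceeded
from django.core.mail import get_connection

@shared_task(bind=True, soft_time_limit=50, time_limit=60, max_retries=3)
def send_messages(self, messages):
    try:
        # Send through the real (non-Celery) backend, e.g. SMTP.
        connection = get_connection(backend='django.core.mail.backends.smtp.EmailBackend')
        connection.send_messages(messages)
    except SoftTimeLimitExceeded as exc:
        # The soft limit raises inside the task, so the task can schedule a
        # retry instead of losing the job when the hard limit fires SIGKILL.
        raise self.retry(exc=exc, countdown=30)
```

Alternatively, combining task_acks_late with task_reject_on_worker_lost makes the broker redeliver the message after the worker is killed, but as the Celery docs warn, that can cause the same emails to be sent more than once.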