
Retry on TimeLimitExceeded #73

Open
adi- opened this issue May 12, 2020 · 2 comments
Comments

adi- commented May 12, 2020

Sometimes the worker gets stuck executing email tasks (probably connection problems). To work around this, Celery has a TASK_TIME_LIMIT option that lets you limit task execution time (p.s. the default is no limit). However, when a task hits the time limit, the worker is killed with SIGKILL, and from what I see in the logs, the task is not retried a second time.

[2020-05-12 11:04:51,352: INFO/MainProcess] Received task: djcelery_email_send_multiple[44dfa3fd-f801-4d8d-b15f-eec05c1e04b4]
[2020-05-12 11:05:51,371: ERROR/MainProcess] Task handler raised error: TimeLimitExceeded(60,)
Traceback (most recent call last):
  File "/opt/venvs/www/lib/python3.5/site-packages/billiard/pool.py", line 658, in on_hard_timeout
    raise TimeLimitExceeded(job._timeout)
billiard.exceptions.TimeLimitExceeded: TimeLimitExceeded(60,)
[2020-05-12 11:05:51,373: ERROR/MainProcess] Hard time limit (60s) exceeded for djcelery_email_send_multiple[44dfa3fd-f801-4d8d-b15f-eec05c1e04b4]
[2020-05-12 11:05:51,522: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:17908 exited with 'signal 9 (SIGKILL)'
[2020-05-12 11:09:45,841: INFO/MainProcess] Received task: djcelery_email_send_multiple[399a06d3-629f-42df-a926-9afb76afc98e]
[2020-05-12 11:09:47,142: INFO/ForkPoolWorker-2] Task djcelery_email_send_multiple[399a06d3-629f-42df-a926-9afb76afc98e] succeeded in 1.2928511229692958s: 1

I am considering using https://docs.celeryproject.org/en/latest/userguide/configuration.html#task-reject-on-worker-lost, but as the docs state, it is probably not the best way to handle this error.

I guess the easiest way would be to handle such errors inside django-celery-email itself. Or maybe I am missing something?


adi- commented May 12, 2020

Meanwhile, I'll try to use TASK_SOFT_TIME_LIMIT.


adi- commented May 13, 2020

> Meanwhile, I'll try to use TASK_SOFT_TIME_LIMIT.

Ok, nice. The above is working like a charm. By the way, my error with Celery was Cannot reach CELERY_EMAIL_BACKEND django. I guess it would be wise to mention in django-celery-email's docs that it is recommended to set TASK_SOFT_TIME_LIMIT.
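For anyone landing here, a minimal settings sketch of that fix. The `CELERY_`-prefixed names assume Celery 4+ configured from Django settings with `namespace='CELERY'`; adjust the prefix to your setup. The specific limit values are examples, not recommendations:

```python
# settings.py (sketch; names assume the CELERY_ settings namespace)
CELERY_TASK_SOFT_TIME_LIMIT = 60  # raises SoftTimeLimitExceeded inside the task
CELERY_TASK_TIME_LIMIT = 90       # hard cap: worker is SIGKILLed, no retry possible
```

Keeping the soft limit below the hard limit gives the task a window to clean up or retry before the worker process is killed.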
