It's common practice to perform healthchecks inside containers in a dockerized environment. If the healthcheck fails to pass within the defined parameters, the docker daemon restarts the container. An HTTP / uwsgi process usually allows such a healthcheck via `curl` or `uwsgi_curl`. The situation is much more chaotic for non-HTTP services such as workers and schedulers, but it should be possible to provide some CLI to check that the worker or scheduler is working properly.
Right now, I could add a custom job that just returns some sentinel (e.g. a fixed string). By kicking that job and checking its result, I could verify that the worker process is up and running and processing kicked tasks (see the sketch below). A similar check could be performed for the scheduler. This is not very convenient, because I have to add a new module with a custom healthcheck task to each project deployment (or to the project itself, which doesn't make a lot of sense).
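For illustration, a minimal sketch of that per-project workaround. The module name `healthcheck.py`, the `SENTINEL` value and the `healthcheck_task` name are made up for this example, and `InMemoryBroker` is used only to keep the snippet self-contained; a real deployment would import the project's actual broker (Redis, NATS, etc.) so that the task is executed by the separate worker process:

```python
# healthcheck.py -- hypothetical per-project module.
# InMemoryBroker keeps this snippet self-contained (the task runs in-process);
# a real deployment would import the project's own broker instead so the
# check exercises the actual worker.
from taskiq import InMemoryBroker

broker = InMemoryBroker()

SENTINEL = "taskiq-healthcheck-ok"


@broker.task
async def healthcheck_task() -> str:
    # Return a fixed sentinel so the caller can verify the worker really
    # executed the task rather than just accepting it.
    return SENTINEL
```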
What do you think about adding such a healthcheck task to taskiq itself? It could provide a new subcommand `taskiq check` or `taskiq healthcheck` (or maybe `taskiq status`?) that would kick the job and check its result. The exit code of this CLI tool would then signify the check result (it could also write some useful info to stdout); a rough sketch of what that could look like is below. A scheduler check could be performed similarly (maybe by scheduling a task for `now()`), but I haven't thought that through yet.
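To make the exit-code idea concrete, a `taskiq check`-style command could boil down to something like the standalone script below: kick the healthcheck task, wait for its result with a timeout, print a short status line and exit 0 or 1. It reuses the hypothetical `healthcheck` module from the sketch above; the timeout and messages are arbitrary, and none of this is an existing taskiq interface:

```python
# check.py -- rough sketch of what a "taskiq check" subcommand could do,
# expressed as a standalone script. Exit code 0 means healthy, 1 means not.
import asyncio
import sys

# Hypothetical module from the sketch above.
from healthcheck import SENTINEL, broker, healthcheck_task


async def run_check() -> int:
    await broker.startup()
    try:
        task = await healthcheck_task.kiq()
        # Wait up to 10 seconds (arbitrary) for a worker to process the task.
        result = await task.wait_result(timeout=10)
        if not result.is_err and result.return_value == SENTINEL:
            print("worker is healthy")
            return 0
        print("worker returned an unexpected result")
        return 1
    except Exception as exc:  # timeout, broker connection error, etc.
        print(f"healthcheck failed: {exc}")
        return 1
    finally:
        await broker.shutdown()


if __name__ == "__main__":
    sys.exit(asyncio.run(run_check()))
```

A container healthcheck (e.g. a Docker `HEALTHCHECK` directive) could then simply invoke that command and rely on its exit code.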