Confusing log for long running tasks: "dependency 'Task Instance Not Running' FAILED: Task is in the running state"
Apache Airflow version: 1.10.* / 2.0.* / 2.1.*
Kubernetes version (if you are using kubernetes) (use kubectl version): Any
Environment:
- Cloud provider or hardware configuration: Any
- OS (e.g. from /etc/os-release): Any
- Kernel (e.g. uname -a): Any
- Install tools: Any
- Others: N/A
What happened:
This line in the TaskInstance log is very misleading. It seems to happen for tasks that take longer than one hour. When users are waiting for tasks to finish and see this in the log, they often get confused. They may think something is wrong with their task or with Airflow. In fact, this line is harmless. It’s simply saying “the TaskInstance is already running so it cannot be run again”.
{taskinstance.py:874} INFO - Dependencies not met for <TaskInstance: ... [running]>, dependency 'Task Instance Not Running' FAILED: Task is in the running state
{taskinstance.py:874} INFO - Dependencies not met for <TaskInstance: ... [running]>, dependency 'Task Instance State' FAILED: Task is in the 'running' state which is not a valid state for execution. The task must be cleared in order to be run.
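For illustration, here is a minimal sketch of the kind of check that produces this message. This is not Airflow's actual dependency-check code; the function name and state value are made up to model the behaviour described above: the already-running attempt is left alone, and a duplicate attempt to run the same TaskInstance simply fails the check and emits the INFO line.

```python
# Illustrative sketch only -- NOT Airflow's real implementation.
import logging

log = logging.getLogger("airflow.task")

def may_start(task_instance_state: str) -> bool:
    """Refuse to (re)start a TaskInstance that is already running."""
    if task_instance_state == "running":
        # The harmless INFO line users see while the original attempt keeps running.
        log.info(
            "Dependencies not met for <TaskInstance: ... [running]>, "
            "dependency 'Task Instance Not Running' FAILED: "
            "Task is in the running state"
        )
        return False
    return True
```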
What you expected to happen:
This confusion is unnecessary. The line should either be silenced or replaced with a clearer message.
How to reproduce it:
Any task that takes more than an hour to run has this line in the log.
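A minimal reproduction sketch, assuming Airflow 2.x with the Celery executor (the dag_id and task_id below are illustrative): trigger this DAG and the confusing INFO line should appear roughly one hour after the task starts.

```python
# Minimal reproduction sketch (assumes Airflow 2.x; names are illustrative).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="long_running_task_repro",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    BashOperator(
        task_id="sleep_longer_than_an_hour",
        bash_command="sleep 3700",  # just over one hour
    )
```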
Top GitHub Comments
After some more investigation, it's very likely we see this log appearing an hour after a long running task started because of the default visibility_timeout setting in Celery. This code in default_celery.py sets visibility_timeout to 21600 only if the broker_url starts with redis or sqs. In our case we are using Redis Sentinel, so it's still Redis although the URL starts with sentinel. Therefore the visibility_timeout is left at 3600, which is the default according to the Celery documentation. The weird thing is that after I manually changed visibility_timeout to a very large integer in airflow.cfg, the same log still showed up exactly an hour after a task started. So it seems changing visibility_timeout in this case does not make any difference. Not sure if anyone has experienced the same.

@david30907d maybe try changing visibility_timeout to a large number in your setup and see if it still happens after an hour. If it stops for you, it means visibility_timeout is probably the cause, and there may be something wrong in our own setup that keeps the visibility_timeout change from taking effect.

In the case where the visibility timeout is reached, it's confusing that there is not a clear log line saying the task has been killed for taking too long to complete.
(If that’s indeed what is happening.)
@potiuk is it the case that the Celery task is killed, or is it simply no longer streaming logs into Airflow at that point?
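To make the first comment's reasoning concrete, here is a small sketch of the timeout fallback it describes. This is an illustrative model, not a copy of default_celery.py; the function and parameter names are made up.

```python
# Illustrative sketch of the fallback described above (not the real
# default_celery.py code): visibility_timeout is only raised to 21600 when the
# broker URL is recognised as redis:// or sqs://, so a sentinel:// URL falls
# back to Celery's own default of 3600 seconds -- exactly one hour.
from typing import Optional

def effective_visibility_timeout(broker_url: str, configured: Optional[int] = None) -> int:
    """Return the Celery visibility_timeout (in seconds) that would apply."""
    if configured is not None:
        return configured              # explicitly set by the user
    if broker_url.startswith(("redis://", "sqs://")):
        return 21600                   # Airflow's default for recognised brokers
    return 3600                        # Celery's default, e.g. for sentinel:// URLs

# A Redis Sentinel broker URL is not matched, so the timeout stays at 3600.
print(effective_visibility_timeout("sentinel://sentinel-host:26379"))  # -> 3600
```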