`ALL_DONE` trigger rule not respected in TaskFlow API for upstream failures
Apache Airflow version
2.3.0
What happened
Hello!
I’m using the TaskFlow API for my DAG and wanted the last task to run even when a previous task failed, so I declared it with `@task(trigger_rule=TriggerRule.ALL_DONE)`. However, when an upstream task fails, the ending task fails too.
The UI confirms that the task’s trigger rule is declared as `all_done`.

What you think should happen instead
No response
How to reproduce
This is the whole code:

```python
import logging

import pendulum

from airflow.decorators import dag, task
from airflow.utils.trigger_rule import TriggerRule


@dag(
    schedule_interval=None,
    start_date=pendulum.datetime(2021, 5, 10, tz="UTC"),
    catchup=False,
)
def taskflow_trigger():
    @task()
    def first_task():
        return "first_task"

    @task
    def second_task(value):
        return "second_task"

    @task
    def task_to_fail(value):
        data = {"test": 1}
        val = data["not_here"]  # raises KeyError, failing the task
        return val

    @task(trigger_rule=TriggerRule.ALL_DONE)
    def end(value):
        return "yes"

    op_1 = first_task()
    op_3 = task_to_fail(op_1)
    op_2 = second_task(op_1)
    end([op_3, op_2])


taskflow_trigger = taskflow_trigger()
```
Operating System
Debian GNU/Linux 11 (bullseye)
Versions of Apache Airflow Providers
No response
Deployment
Docker-Compose
Deployment details
No response
Anything else
No response
Are you willing to submit PR?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project’s Code of Conduct
Since an XCom push can never be None[^1], we can probably just set the value to None in this case. Combining this with #24401, we probably need to do some additional checks in `XComArg.resolve()` to take into account the current task’s trigger rule, and resolve to None when appropriate.

[^1]: A design decision I didn’t agree with, but that ship sailed long ago.
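The check described above could look roughly like this. This is a pure-Python sketch, not actual Airflow code: `resolve_xcom`, the `upstream_failed` flag, and the two-value `TriggerRule` enum are simplified stand-ins for the real `XComArg.resolve()` and `TaskInstance` machinery.

```python
from enum import Enum


class TriggerRule(str, Enum):
    ALL_SUCCESS = "all_success"
    ALL_DONE = "all_done"


def resolve_xcom(pushed_value, upstream_failed, trigger_rule):
    """Sketch of trigger-rule-aware XComArg resolution.

    If the upstream task failed but the current task still runs
    (e.g. trigger_rule=ALL_DONE), there is no pushed XCom to pull,
    so resolve to None instead of blowing up the downstream task.
    """
    if upstream_failed:
        if trigger_rule == TriggerRule.ALL_DONE:
            return None
        raise RuntimeError(
            "upstream failed and the trigger rule does not allow running"
        )
    return pushed_value


# Upstream succeeded: the pushed value flows through unchanged.
print(resolve_xcom("first_task", False, TriggerRule.ALL_DONE))  # first_task
# Upstream failed but the task runs anyway: resolve to None.
print(resolve_xcom(None, True, TriggerRule.ALL_DONE))  # None
```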
Duplicate of #24338 (it’s the same underlying cause).