Task resulted in "Pending" state
Description
A clear description of the bug
Very simply, a task in my flow resulted in a “Pending” state.
pastebin – Last Portion of Flow Log
Expected Behavior
What did you expect to happen instead?
Per @kmoonwright and the docs, the Pending state should only be transitional, not a final state for a task. I would have expected the flow to wait until the task reached a Finished state (either a Success or a Failure).
Reproduction
A minimal example that exhibits the behavior.
I don’t know how to reproduce this in a minimal way, but the workflow I am working on is entirely open to the public, data access and code included.
git clone https://github.com/CouncilDataProject/cdptools.git
cd cdptools
git checkout feature/update-index-pipeline
pip install -e .[seattle]
run_cdp_pipeline EventIndexPipeline configs/seattle-event-index-pipeline.json
Environment
Any additional information about your environment
{
  "config_overrides": {},
  "env_vars": [],
  "system_information": {
    "platform": "macOS-10.15.5-x86_64-i386-64bit",
    "prefect_version": "0.12.0+82.ge6b29666b",
    "python_version": "3.8.1"
  }
}
Comments
My general idea is that it may be a bug? The get_minutes_item_file_details task has max_retries=3 and retry_delay=timedelta(seconds=3) attached, so maybe the map “finishes” and signals the downstream tasks to move on, but the downstream task then checks for all of its inputs and sees that a couple of the mapped children failed and are retrying? Just an idea; I am not sure how the internals of Prefect work too deeply. A minimal sketch of the shape I mean follows.
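Here is a minimal sketch of that shape; the task bodies and inputs are hypothetical stand-ins, and only the retry settings match what is stated above:

from datetime import timedelta

from prefect import Flow, task

@task(max_retries=3, retry_delay=timedelta(seconds=3))
def get_minutes_item_file_details(item):
    # Hypothetical body: in the real pipeline this can fail transiently,
    # putting the mapped child into a Retrying state.
    return {"item": item}

@task
def downstream(all_details):
    # Reduce step: consumes the full list of mapped results, so it is not
    # ready to run while any mapped child is still Retrying.
    return len(all_details)

with Flow("event-index-sketch") as flow:
    details = get_minutes_item_file_details.map(["a", "b", "c"])
    downstream(details)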

Just to chime in here on this in particular:
There actually are normal ways for a task to seemingly “end” in a Pending state; for example, the following flow:
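(a minimal sketch of such a flow, assuming an 11-minute retry delay per the note below; the task bodies are placeholders)

from datetime import timedelta

from prefect import Flow, task

@task(max_retries=1, retry_delay=timedelta(minutes=11))
def failing_task():
    # Always fails, so its first run ends in Retrying rather than Failed.
    raise ValueError("boom")

@task
def second_task(upstream):
    return upstream

with Flow("retry-demo") as flow:
    second_task(failing_task())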
If you run this with Prefect Cloud, you'll find that failing_task goes Pending -> Running -> Failed -> Retrying, and the Flow still visits second_task but determines it is not yet ready to run (because its upstream is not complete). This will produce a log entry to that effect. (Of course, after 11 minutes the flow will be rerun and this task will eventually end in a TriggerFailed state.) I think there are a few other ways this could occur naturally (e.g., ClientErrors when tasks fail to set their final state, or if you use a manual_only trigger). Not necessarily suggesting that's the case here, but thought it might be useful info.
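For completeness, a manual_only trigger attaches like this (a sketch; the task body is a placeholder):

from prefect import Flow, task
from prefect.triggers import manual_only

@task(trigger=manual_only)
def gated_task():
    # With this trigger the task does not run until explicitly resumed,
    # so it sits in a non-Finished state in the meantime.
    return "work"

with Flow("manual-trigger-demo") as flow:
    gated_task()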
Thanks a lot @JacksonMaxfield !! 😄