
Internal worker state transitions


In #4360 various deadlock situations are investigated, some of which are connected to the way valid states and state transitions of the worker's[1] TaskState objects are defined.

The current documentation is slightly outdated since dependencies and runnable tasks were consolidated in https://github.com/dask/distributed/pull/4107. The current state transitions now follow the pipeline below (omitting long-running, error and constrained for the sake of clarity [2]).

[Figure: worker-state-master — the current worker state transition diagram]

What’s remarkable about this transition pipeline is that virtually all states allow a transition to memory, and there are multiple allowed transition paths which are only valid in very specific circumstances and only upon intervention by the scheduler. For instance, a task in state flight may be transitioned via ready to executing, but this is only possible if the worker actually possesses the knowledge of how to execute the task, i.e. the TaskState object has its runspec attribute set. This attribute is usually only known to the worker if the scheduler intends for this worker to actually execute the task. The transition path is nevertheless allowed because the scheduler may reassign a dependency, i.e. a task the worker does not know how to compute, for computation on this worker. This may happen if the worker on which the task was originally intended to be computed is shut down.
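As a minimal illustration (hypothetical names, not the actual distributed implementation), the guard on this transition essentially boils down to checking runspec:

```python
# Minimal sketch (hypothetical names, not the actual distributed code) of
# why the flight -> ready transition is gated on runspec.

class TaskState:
    def __init__(self, key, runspec=None):
        self.key = key
        self.runspec = runspec  # set only if this worker should execute the task
        self.state = "flight"


def transition_flight_ready(ts):
    # A task fetched as a dependency may be reassigned for execution here,
    # but only if the worker actually knows how to compute it.
    if ts.runspec is None:
        raise RuntimeError(f"{ts.key}: cannot transition to ready, no runspec")
    ts.state = "ready"
```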

This ambiguity is essentially introduced by no longer distinguishing between dependencies and runnable tasks. What I would propose is to make this distinction explicit in the task states. Consider the following pipeline:

[Figure: worker-state-proposed — the proposed worker state transition diagram]

Every task starts off in new. This is effectively a dummy state and could be omitted. It represents a known task which has not yet been classified as to whether the worker can compute it or not. Based on the answer to this question, it is put into one of the following states (a rough sketch follows the list):

waiting_for_dependencies: This task is intended to be computed by this worker. Once all dependencies are available on this worker, it is transitioned to ready and queued up for execution. (A task with no dependencies is a special case of this.)

waiting_to_fetch: This task is not intended to be computed on this worker; its TaskState here is merely a reference to a remote data key we are about to fetch.
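A rough sketch of the proposed classification out of new, assuming a TaskState like the one sketched above (the function name and the use of runspec as the signal are assumptions for illustration only):

```python
def classify_new_task(ts):
    # Hypothetical classification of a task leaving the dummy "new" state:
    # the decision is made once, explicitly, instead of being re-inferred
    # from attribute presence throughout the worker.
    if ts.runspec is not None:
        # The scheduler wants this worker to compute the task.
        ts.state = "waiting_for_dependencies"
    else:
        # The task is merely a reference to remote data to fetch.
        ts.state = "waiting_to_fetch"
```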

The red transition is only possible via scheduler intervention, i.e. once the scheduler reassigns a task to be computed on this worker. This is relatively painless as long as the TaskState is in a valid state (in particular, runspec is set).

Purple is similar, except that in this case the worker was already trying to fetch the dependency: a gather_dep was already scheduled and the worker is currently fetching the result. If that fetch actually succeeds, we might be in a position to fast-track the “to be executed” task.
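To make the purple fast-track concrete, a hypothetical fetch-completion handler (all names invented for illustration) might do:

```python
def gather_dep_done(ts, result, data):
    # Hypothetical handler for a completed fetch on the purple path: the
    # scheduler reassigned the task for execution here while the fetch was
    # still in flight, but since the result arrived anyway we can skip
    # execution and transition straight to memory.
    data[ts.key] = result  # `data` plays the role of Worker.data
    ts.state = "memory"
```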

I believe defining these transitions properly is essential, and we should strive to set up a similar, if not identical, state machine as in the scheduler (with recommendations / chained state transitions). This is especially important since there are multiple data structures to keep synchronized (e.g. Worker.ready, Worker.data_needed, Worker.in_flight_workers, to name a few) on top of the tasks themselves.
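For reference, the scheduler-style pattern alluded to here works roughly as follows (a simplified sketch, not the actual scheduler code): every transition function may return recommendations for further transitions, which are applied until none remain.

```python
class TaskState:
    def __init__(self, key, state="new"):
        self.key = key
        self.state = state


def transition_new_ready(ts):
    # Example transition function; may recommend follow-up transitions
    # for other tasks (here: none).
    return {}


TRANSITION_TABLE = {("new", "ready"): transition_new_ready}


def transitions(tasks, recommendations):
    # Chained state transitions: apply each recommended transition and
    # fold the follow-up recommendations it produces back into the work
    # list, until the list is empty.
    while recommendations:
        key, finish = recommendations.popitem()
        ts = tasks[key]
        new_recs = TRANSITION_TABLE[ts.state, finish](ts)
        ts.state = finish
        recommendations.update(new_recs)


tasks = {"x": TaskState("x")}
transitions(tasks, {"x": "ready"})
assert tasks["x"].state == "ready"
```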

Last but not least, there have been questions around how Worker.release_key works, when it is called, and what data is actually stored in Worker.data (is it always a subset of tasks or not?). I believe settling the allowed state transitions should help settle these questions.

Alternative: Instead of implementing red/purple, we could just reset to new and start all transitions from scratch. That would help reduce the number of edges/allowed transitions but would pose problems similar to those of the purple path in case the gather_dep is still running.

My open questions:

  • How would speculative task assignment fit into this?
  • How is work stealing mapped here? (That’s kind of the reverse of red/purple, isn’t it?)
  • Do Actors impact this in any kind of way?

cc @gforsyth

[1] The TaskState objects of the scheduler follow a different definition and allow different state transitions. I consider the consolidation of the two out of scope for this issue.

[2] Especially the state error is very loosely defined; tasks can be transitioned to error from almost every start state.


Possibly related issues:

  • https://github.com/dask/distributed/issues/4724
  • https://github.com/dask/distributed/issues/4587
  • https://github.com/dask/distributed/issues/4439
  • https://github.com/dask/distributed/issues/4550
  • https://github.com/dask/distributed/issues/4133
  • https://github.com/dask/distributed/issues/4721
  • https://github.com/dask/distributed/issues/4800
  • https://github.com/dask/distributed/issues/4446


Top GitHub Comments

fjetter commented, Jun 18, 2021

We’ve recently merged a big PR which addresses some of the deadlock situations we’ve seen lately; see https://github.com/dask/distributed/pull/4784. We currently do not have reason to believe that there are more of these deadlock situations and will therefore pause the big worker state refactoring this issue triggered, in favour of maintaining stability for a while. We will, of course, try to address missing state transitions as we go, but will no longer refactor the entire state machine unless necessary.

The deadlock fixes will be released later today, see dask/community#165

mrocklin commented, Apr 12, 2021

I would love to see that image (or some future version of it) end up in the developer docs at distributed.dask.org. I think that would help future folks.

On Mon, Apr 12, 2021 at 12:13 PM Gil Forsyth @.***> wrote:

This is great, @fjetter – thanks for sharing it!

  • We will never remove or delete information from a TaskState instance. In particular, intention is never inferred from whether or not a given attribute exists, is null, etc. This is particularly important for the runnable vs. not-runnable distinction, which should be based on a dedicated attribute instead of on runspec is None. This allows for easier recovery of the state machine by simply “starting from the beginning”.

I’m very much on board with this (and the rest of your points) – a clarifying question here: the state of a TaskState instance changes over time – these transitions are currently tracked in self.story, but would we want to include a history of previous states / transitions in the TaskState instance itself?
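A minimal sketch of what the two ideas could look like combined — an explicit runnable flag rather than inferring intent from runspec is None, plus a per-instance transition log (all names hypothetical, not the actual distributed API):

```python
import time


class TaskState:
    def __init__(self, key, runnable=False):
        self.key = key
        self.runnable = runnable  # explicit flag, never inferred from runspec
        self.state = "new"
        self.history = []  # (previous_state, new_state, timestamp) records

    def set_state(self, new_state):
        # Record every transition on the instance itself, alongside whatever
        # the worker-wide story/log already tracks.
        self.history.append((self.state, new_state, time.time()))
        self.state = new_state
```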


