
[FEA] Dynamic Task Graph / Task Checkpointing


TL;DR: By introducing task checkpointing, where a running task can update its state on the scheduler, it is possible to reduce scheduler overhead, support long-running tasks, and use explicit worker-to-worker communication while maintaining resilience.

Motivation

As discussed in many issues and PRs (e.g. https://github.com/dask/distributed/issues/3783, https://github.com/dask/distributed/issues/854, https://github.com/dask/distributed/issues/3139, https://github.com/dask/dask/issues/6163), the scheduler overhead of Dask/Distributed can be a problem as the number of tasks increases. Many proposals involve optimizing the Python code through PyPy, Cython, Rust, or some other tool/language.

This proposal takes an orthogonal approach: reduce the number of tasks and make it possible to encapsulate domain knowledge of specific operations into tasks, such as minimizing memory use, overlapping computation and communication, etc.

Related Approaches

Current Task Workflow

All tasks go through the following flow:

**Client**  
  1. Graph creation  
  2. Graph optimization 
  3. Serialize graph 
  4. Send graph to scheduler 
**Scheduler** 
  5. Update graph 
  6. Send tasks, one at a time, to workers 
**Worker**  
  7. Execute tasks
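
For orientation, all of the steps above sit behind an ordinary compute call. Here is a minimal sketch against a local cluster (assuming dask and distributed are installed); the step numbers map onto the flow above:

```python
import dask
from dask.distributed import Client

@dask.delayed
def inc(x):
    return x + 1

if __name__ == "__main__":
    client = Client()  # local cluster: one scheduler plus workers

    # Step 1: graph creation (lazy; nothing has run yet).
    total = inc(1) + inc(2)

    # Steps 2-4 happen on the client (optimize, serialize, send),
    # steps 5-6 on the scheduler, and step 7 on the workers.
    print(total.compute())  # 5
```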

Task Fusion

All tasks still go through steps 1 to 4, but by fusing tasks (potentially into a SubgraphCallable) only a reduced graph goes through steps 5 and 6, which can significantly ease the load on the scheduler. However, fusing tasks also limits the available parallelism, so it has its limits.
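
As a concrete illustration of linear fusion, dask exposes dask.optimization.fuse, which collapses chains of tasks so that fewer of them ever reach the scheduler. A minimal sketch on a hand-written graph:

```python
from dask.optimization import fuse

def inc(x):
    return x + 1

# A linear chain of three inc tasks hanging off one constant.
dsk = {"a": 1, "b": (inc, "a"), "c": (inc, "b"), "d": (inc, "c")}

# Fuse the chain; only the output key "d" has to survive.
fused_dsk, dependencies = fuse(dsk, keys=["d"])
print(len(dsk), "->", len(fused_dsk))  # fewer tasks reach the scheduler
```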

Task Generation

At graph creation, we use task generators to reduce the size of the graph, particularly in operations such as shuffle() that consist of up to n**2 tasks. This means that only steps 3 to 7 encounter all tasks, and if we allow the scheduler to execute Python code, we can extend this to steps 5 to 7.
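
No such generator mechanism exists in dask today, so the following is a purely hypothetical sketch of the idea; shuffle_task_generator and the "transfer" task are invented for illustration:

```python
# Hypothetical sketch: dask has no task-generator API today. The point
# is that the graph carries a compact recipe instead of n**2 explicit
# tasks, and the recipe is expanded as late as possible.

def shuffle_task_generator(n_partitions):
    """Lazily yield (key, task) pairs for an all-to-all shuffle."""
    for i in range(n_partitions):
        for j in range(n_partitions):
            # "transfer" stands in for a real exchange function.
            yield (("shuffle", i, j), ("transfer", ("input", i), j))

# The recipe is O(1) to ship; materializing it yields n**2 tasks.
print(sum(1 for _ in shuffle_task_generator(4)))  # 16
```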

Submit Tasks from Tasks

Instead of implementing expensive operations such as shuffle() in a task graph, we can use a few long-running jobs that use direct worker-to-worker communication to bypass the scheduler altogether (see the sketch after this list). This approach is very efficient, but it has two major drawbacks:

  • It provides no resilience: if a worker disconnects unexpectedly, the state of the long-running jobs is lost.
  • In cases such as shuffle(), this approach requires extra memory because the inputs to the long-running jobs must stay in memory until the jobs complete, which can be an absolute deal breaker (https://github.com/dask/dask/pull/6051).
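
distributed already supports this pattern through worker_client(), which lets a long-running task secede from the worker's thread pool and submit work itself. A minimal sketch (the job logic is invented for illustration):

```python
from dask.distributed import Client, worker_client

def long_running_job(xs):
    # worker_client() secedes from the worker's thread pool, so this
    # task can run for a long time without blocking other tasks, and
    # it hands us a client for submitting work from inside the task.
    with worker_client() as client:
        futures = client.map(lambda x: x + 1, xs)
        return sum(client.gather(futures))

if __name__ == "__main__":
    client = Client()
    print(client.submit(long_running_job, [1, 2, 3]).result())  # 9
```

Note that the drawbacks above still apply: if the worker running long_running_job dies, its accumulated state is gone.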

Proposed Approach

Dynamic Task Graph / Task Checkpointing

At graph creation, we use dynamic tasks to reduce the size of the graph and encapsulate domain knowledge of specific operations. This means that only step 7 encounters all tasks.

Dynamic tasks are regular tasks that are optimized, scheduled, and executed on workers like any other task; they differ only when they use checkpointing. The following is the logic flow when a running task calls checkpointing (a hypothetical API sketch follows the list):

  1. A task running on a worker sends a task update to the scheduler that contains:
    • New keys that are now in memory on the worker
    • New keys that the task now depends on
    • Existing keys that the task no longer depends on
    • A new task (function & key/literal arguments) that replaces the existing task.
  2. The scheduler updates the relevant TaskStates and releases keys that no one depends on anymore.
  3. If all dependencies are satisfied, the task can now be rescheduled from its new state. If not, the task transitions to the waiting state.
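
To make the update message concrete, here is a purely hypothetical sketch of its layout; neither TaskUpdate nor checkpoint exists in distributed today, and the fields simply mirror the four items in step 1:

```python
from dataclasses import dataclass

@dataclass
class TaskUpdate:
    """Hypothetical checkpoint message sent by a running task (step 1)."""
    new_keys: dict           # keys now in memory on this worker
    add_dependencies: list   # keys the task now depends on
    drop_dependencies: list  # keys the task no longer depends on
    replacement_task: tuple  # (function, key/literal args) replacing it

def checkpoint(update: TaskUpdate):
    """Hypothetical stub: would send `update` to the scheduler, which
    updates the relevant TaskStates and releases unreferenced keys
    (step 2), then reschedules or parks the task (step 3)."""
    raise NotImplementedError("illustration only")
```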

Any thoughts? Is it something I should begin implementing?

cc. @mrocklin, @quasiben, @rjzamora, @jakirkham


Top GitHub Comments

sjperkins commented, May 19, 2020 (1 reaction)

+1 on supporting this as a good idea, because I think it would make possible features that I wish to implement at a higher level of abstraction: the ability to support (1) while loops and (2) if-else statements as graph tasks.

In @madsbk’s terminology, these would be checkpoint tasks that submit new work to the scheduler depending on whether their respective logic conditions are satisfied.

@madsbk’s description of checkpointing:

  • A task running on a worker sends a task update to the scheduler that contains:
    • New keys that are now in memory on the worker
    • New keys that the task now depends on
    • Existing keys that the task no longer depends on
    • A new task (function & key/literal arguments) that replaces the existing task.

sounds conceptually similar to the way TensorFlow handles its while_loop construct:

> Note that while_loop calls cond and body exactly once (inside the call to while_loop, and not at all during Session.run()). while_loop stitches together the graph fragments created during the cond and body calls with some additional graph nodes to create the graph flow that repeats body until cond returns false.

In the dask realm, I was thinking along the following lines:

```python
import dask.array as da

class WhileLoop:
    """Drive a dask computation in a loop: re-evaluate `condition`
    after each application of `body` until it is no longer met."""

    def __init__(self, condition, body):
        self.condition = condition
        self.body = body

    def __call__(self, *args):
        # Each .compute() round-trips through the scheduler; with task
        # checkpointing this could instead extend the graph in place.
        while self.condition(*args).compute():
            args = self.body(*args)
        return args

def cond(array):
    return array.sum() < 100

def body(array):
    return (array + 1,)

while_loop = WhileLoop(cond, body)
out = while_loop(da.zeros(10))
```
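
An if-else construct could take the same shape; here is a sketch in the same spirit (not an existing dask API, and subject to the same caveat that every .compute() round-trips through the scheduler):

```python
class IfElse:
    """Evaluate the condition once, then run only the chosen branch."""

    def __init__(self, condition, true_body, false_body):
        self.condition = condition
        self.true_body = true_body
        self.false_body = false_body

    def __call__(self, *args):
        if self.condition(*args).compute():
            return self.true_body(*args)
        return self.false_body(*args)
```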

jacobtomlinson commented, May 19, 2020 (1 reaction)

This sounds great. I'd be keen to help out with this.
