
mpi4py.futures and COMM_WORLD.recv(): Blocking calls

See original GitHub issue

I am trying to use mpi4py.futures with MPIPoolExecutor. My aim is to use the .send() and .recv() methods of COMM_WORLD to send data from the master process to the worker processes. However, the workers never receive any message and execution blocks. Could you let me know whether MPIPoolExecutor is not meant to be used with send() and recv()?

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
dalcinl commented, May 1, 2021

Why would you like to know that? If the rationale is debugging/logging/profiling purposes then I’ll not object. However, if your algorithm/application will depend on that, then perhaps mpi4py.futures is not the right tool for you.

Would it be enough to know the rank post-mortem, after the task completed? If that is the case, then you just need to make your task return (MPI.COMM_WORLD.rank, task_result).

Could you describe what you are trying to achieve exactly?

With MPICommExecutor you may have a better chance. However, you should use a duplicated communicator to not interfere with the master-worker communication, something like this:

from mpi4py import MPI
from mpi4py.futures import MPICommExecutor

comm = MPI.COMM_WORLD
dup_comm = comm.Dup()  # duplicated context for your own send()/recv() traffic
with MPICommExecutor(comm) as executor:
    if executor is not None:
        ...  # your code here; use dup_comm for explicit messaging
0 reactions
blueskypac commented, Apr 30, 2021

Sorry to ask one more question here: is it also not recommended to use send() and recv() with mpi4py.futures.MPICommExecutor? My goal in sending messages back and forth is to find out on which node a task will run once a slot becomes available after a future has completed. If three futures complete at almost the same time and a new task is then submitted to the pool, how can I find out which rank it will go to, and thus which node in the cluster will be used?
