mpi4py.futures and COMM_WORLD.recv(): Blocking calls
I am trying to use mpi4py.futures with MpiProcessPool. I want to use the .send() and .recv() methods of COMM_WORLD to send data from the master process to the worker processes. However, the workers never receive any message, and execution blocks. Could you let me know whether MpiProcessPool is not meant to be used with send() and recv()?
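For illustration, here is a minimal, hypothetical sketch of the pattern described above, assuming the pool class is mpi4py.futures.MPIPoolExecutor and using an illustrative worker_task function (not taken from the issue):

```python
from mpi4py import MPI
from mpi4py.futures import MPIPoolExecutor

def worker_task():
    # Intended: receive data sent by the master over COMM_WORLD.
    # With the default spawn-based startup of MPIPoolExecutor, the spawned
    # workers get their own MPI_COMM_WORLD that does not include the master
    # process, so this recv() is never matched and blocks forever.
    return MPI.COMM_WORLD.recv(source=0, tag=0)

if __name__ == '__main__':
    with MPIPoolExecutor(max_workers=2) as executor:
        future = executor.submit(worker_task)
        # There is no straightforward way for the master to send() on the
        # communicator the worker is receiving on, so this call never returns.
        print(future.result())
```

Whether a matching send() is possible at all depends on how the pool is started; with the default dynamic-process-management startup, the workers are not part of the master's COMM_WORLD.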
Why would you like to know that? If the rationale is debugging/logging/profiling purposes, then I'll not object. However, if your algorithm/application will depend on that, then perhaps mpi4py.futures is not the right tool for you.

Would it be enough to know the rank post-mortem, after the task completed? If that is the case, then you just need to make your task return (MPI.COMM_WORLD.rank, task_result).

Could you describe what you are trying to achieve exactly?
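For example, a minimal sketch of the post-mortem suggestion above, assuming MPIPoolExecutor and an illustrative task function:

```python
from mpi4py import MPI
from mpi4py.futures import MPIPoolExecutor

def task(x):
    # Return the worker's own rank alongside the result, so the master
    # learns after the fact which rank executed the task.
    return (MPI.COMM_WORLD.rank, x * x)

if __name__ == '__main__':
    with MPIPoolExecutor() as executor:
        for rank, result in executor.map(task, range(8)):
            print(f"task ran on rank {rank}, result = {result}")
```

Note that the reported rank is the worker's rank in its own COMM_WORLD; which communicator that is depends on how the pool was started (spawned workers versus mpiexec -n ... python -m mpi4py.futures ...).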
With MPICommExecutor you may have a better chance. However, you should use a duplicated communicator to not interfere with the master-worker communication, something like this:
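A minimal sketch of that idea (not the original snippet from the thread) might look like the following; it assumes the script is launched with mpiexec, uses an illustrative task function, and relies on the MPI library providing full thread support, since the executor communicates with the workers from a background thread:

```python
from mpi4py import MPI
from mpi4py.futures import MPICommExecutor

# Duplicate COMM_WORLD once, collectively, so that application-level messages
# travel on their own communicator and cannot be confused with the executor's
# internal master-worker traffic.
comm = MPI.COMM_WORLD.Dup()

def task(x):
    # Tell the master (rank 0) which rank is running this task, using the
    # duplicated communicator, then do the actual work.
    comm.send((comm.rank, x), dest=0, tag=1)
    return x * x

if __name__ == '__main__':
    with MPICommExecutor(MPI.COMM_WORLD, root=0) as executor:
        if executor is not None:  # only the root rank gets an executor
            futures = [executor.submit(task, i) for i in range(4)]
            for _ in futures:
                rank, i = comm.recv(source=MPI.ANY_SOURCE, tag=1)
                print(f"task {i} is running on rank {rank}")
            results = [f.result() for f in futures]
    comm.Free()
```

Run with something like mpiexec -n 5 python script.py: rank 0 acts as the master and the remaining ranks execute the tasks, while the notifications travel on the duplicated communicator.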
Sorry to ask one more question here: is it also not recommended to use send() and recv() with mpi4py.futures.MPICommExecutor? My goal of sending messages back and forth is basically to know on which node a task will be submitted once a slot becomes available after a future has completed. If three futures complete almost at the same time and a new task is then submitted to the pool, how can I find out which rank it will go to, and thus which node will be used in the cluster?