RFC: explicit shared memory

See original GitHub issue

With the increasing availability of large machines, more workloads are being run as many processes on a single node. In a workflow where a single large array must reach every worker, currently this might be done by passing the array from the client (bad), using scatter (OK), or loading the data in the workers (good, but not efficient if we want one big array).

A large memory and transfer cost could be saved by putting the array into POSIX shared memory and referencing it from the workers. If the array is hosted in shm, there is no copy or de/serialisation cost (only the OS-call cost of attaching to the shm). It could be appropriate for ML workflows where every task wants to make use of the whole of a large dataset (as opposed to chunking the dataset as dask.array operations do); sklearn with joblib is an example where we explicitly recommend scattering large data.

As a really simple example, see my gist, in which the user explicitly wraps a numpy array in the client, and the dask workers then no longer need their own copies. Note that SharedArray is just a simple way to pass the array metadata as well as its buffer; it works on Python 3.7 and probably earlier.
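The gist itself is not reproduced here, but a minimal sketch of the same idea follows, using the stdlib multiprocessing.shared_memory module rather than the SharedArray package. The ShmArray class and its to_shm/open methods are illustrative names only, not part of any existing dask or SharedArray API, and the sketch is POSIX-flavoured (on Windows the segment would vanish once the last handle closes).

```python
# Hedged sketch only: ShmArray, to_shm and open are hypothetical names.
# The wrapper pickles as a few bytes (segment name, shape, dtype), so shipping
# it to dask workers does not copy the array data.
from multiprocessing import shared_memory

import numpy as np


class ShmArray:
    def __init__(self, name, shape, dtype):
        self.name = name          # shared memory segment name
        self.shape = shape
        self.dtype = np.dtype(dtype)

    @classmethod
    def to_shm(cls, arr):
        # Called once on the client: copy the array into a shared memory segment.
        shm = shared_memory.SharedMemory(create=True, size=arr.nbytes)
        view = np.ndarray(arr.shape, dtype=arr.dtype, buffer=shm.buf)
        view[:] = arr
        del view       # release the buffer export so close() succeeds
        shm.close()    # on Linux the segment persists in /dev/shm until unlink()
        return cls(shm.name, arr.shape, arr.dtype)

    def open(self):
        # Called inside a task on a worker: attach to the existing segment.
        # No copy and no deserialisation, only the OS call to map the segment.
        shm = shared_memory.SharedMemory(name=self.name)
        return np.ndarray(self.shape, dtype=self.dtype, buffer=shm.buf), shm
```

A task would receive the small wrapper, call open(), and operate on a zero-copy view of the data; someone still has to call unlink() on the segment once all workers are done with it.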

To be clear: there is no suggestion of adding anything to the existing distributed serialisation code, because it’s really hard to guess when a user might want such a thing. It should be explicitly opt-in.

Further,

  • Similar techniques could be used to wrap arrow or pandas data, although no one probably wants to delve through existing in-memory objects to find the underlying buffers.
  • Pickle protocol 5 works on buffers and memoryviews, so it might serve as a generic helper here (see the sketch after this list).
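As a rough illustration of the last bullet, the protocol-5 hooks look like this; a shared-memory helper could sit inside buffer_callback and supply the buffers back at load time (the comments mark where that hypothetical helper would go).

```python
# Sketch of the pickle protocol 5 out-of-band buffer hooks.
import pickle

import numpy as np

arr = np.arange(1_000_000, dtype="float64")

buffers = []
# With protocol 5 the raw buffers are handed to buffer_callback as
# pickle.PickleBuffer objects instead of being embedded in `payload`;
# only the small metadata ends up in the pickle stream.
payload = pickle.dumps(arr, protocol=5, buffer_callback=buffers.append)

# A hypothetical shm helper would copy each buffer into shared memory here
# and record the segment names, instead of keeping the PickleBuffers around.
restored = pickle.loads(payload, buffers=buffers)
assert (restored == arr).all()
```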

cc @crusaderky @quasiben @jsignell

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 4
  • Comments: 24 (17 by maintainers)

Top GitHub Comments

3 reactions
applio commented, Feb 9, 2021

appropriate for ML workflows where every task wants to make use of the whole of a large dataset

This has been a recurring need in my projects where the entirety of the data needs to be accessible to all workers yet duplicating the data for each worker would exceed available system memory. I have primarily used NumPy arrays and pandas DataFrames backed by shared memory thus far – it would be cool to use dask as part of this. My situation might only qualify as a single data point but I am not the only weirdo with this sort of need.

It should be explicitly opt-in.

+1 on this as well.

2 reactions
applio commented, Feb 9, 2021

I think this already works with multiprocessing.shared_memory.SharedMemory – see the 2nd code example in the docs where a NumPy array is backed by shared memory: https://docs.python.org/3/library/multiprocessing.shared_memory.html

The implementation behind multiprocessing.shared_memory is POSIX shared memory on all systems where that’s available and Named Shared Memory on Windows. This makes for a cross-platform API for shared memory that’s tested and viable on quite a variety of different OSes and platform types.
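For reference, the pattern from that docs example, paraphrased rather than copied, is roughly:

```python
# Roughly the pattern from the linked docs example: one process creates the
# segment and fills it, another process attaches to it by name.
from multiprocessing import shared_memory

import numpy as np

a = np.array([1, 1, 2, 3, 5, 8], dtype=np.int64)

# "Process A": create a segment sized for the array and copy the data in.
shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf)
b[:] = a[:]

# "Process B": attach by the segment's name; this is a zero-copy view.
other = shared_memory.SharedMemory(name=shm.name)
c = np.ndarray(a.shape, dtype=a.dtype, buffer=other.buf)
c[0] = 42
assert b[0] == 42  # both views see the same memory

# Cleanup: drop the numpy views, close every handle, unlink exactly once.
del c
other.close()
del b
shm.close()
shm.unlink()
```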

Similar techniques could be used to wrap arrow or pandas data, although no one probably wants to delve through existing in-memory objects to find the underlying buffers.

A more general tool to wrap pandas and pandas-like objects was developed prior to the release of Python 3.8 (and the shared memory constructs in the Python Standard Library), but it was not included in the core because, useful as it was, it was not general purpose enough.
