Support M1 GPU in FARMReader
Is your feature request related to a problem? Please describe.
Since Haystack v1.6 we support PyTorch 1.12, which also means support for the M1 GPU. However, we currently initialize the device to be either cpu or cuda, depending on availability and whether the user passes the use_gpu=True parameter. For GPU use on the M1, PyTorch actually uses the mps backend. See: https://pytorch.org/docs/stable/notes/mps.html
If we allowed users to pass the actual device into the FARMReader, this could make GPU training and inference on the M1 possible.
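To illustrate the gap described above, here is a minimal sketch of device selection that adds an mps branch to the usual cuda-or-cpu logic. The helper name is illustrative, not Haystack's actual code; the mps check assumes PyTorch 1.12+ and is guarded for older versions.

```python
import torch

# Hypothetical helper: the current cpu/cuda logic never selects "mps",
# so Apple Silicon GPUs need an explicit check like the one below.
def pick_device(use_gpu: bool = True) -> torch.device:
    if use_gpu and torch.cuda.is_available():
        return torch.device("cuda")
    # torch.backends.mps exists from PyTorch 1.12 onwards; guard with
    # getattr so this also runs on older PyTorch versions.
    mps = getattr(torch.backends, "mps", None)
    if use_gpu and mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```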
Describe the solution you’d like
Allow the user to pass devices=[<device>] into FARMReader.__init__ and use these devices in initialize_device_settings. We could make this non-breaking by making it an optional argument to the reader init and the device initialization.
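A sketch of what the proposed non-breaking change could look like. The function name and signature are illustrative, not the actual Haystack API: an optional devices argument that, when given, takes precedence over the use_gpu flag.

```python
from typing import List, Optional

import torch

# Illustrative sketch of the proposal: an optional `devices` list that
# overrides the cuda-or-cpu default. Passing devices=None preserves the
# existing behaviour, so the change is non-breaking.
def init_reader_devices(
    use_gpu: bool = True,
    devices: Optional[List[torch.device]] = None,
) -> List[torch.device]:
    if devices is not None:
        # User-supplied devices win, e.g. [torch.device("mps")] on an M1.
        return devices
    if use_gpu and torch.cuda.is_available():
        return [torch.device("cuda")]
    return [torch.device("cpu")]
```

With such an argument, an M1 user could then call something like FARMReader(..., devices=[torch.device("mps")]) (hypothetical usage of the proposed parameter).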
Issue Analytics
- State:
- Created a year ago
- Comments: 14 (13 by maintainers)
Top GitHub Comments
That’s great! I would say that anywhere the user passes an option to initialize_device_settings should have the option of passing a list of devices instead. Similar to what is already done in this load function for the Inferencer: https://github.com/deepset-ai/haystack/blob/be127e5b61e60f59292a1e5d73676eb34691f668/haystack/modeling/infer.py#L175-L176 where devices is of the type defined here: https://github.com/deepset-ai/haystack/blob/be127e5b61e60f59292a1e5d73676eb34691f668/haystack/modeling/infer.py#L128
So what is inconsistent at the moment is that the devices option is only supported in some places in Haystack, and I think we should support it everywhere the user can pass in the use_gpu boolean.

Yes, I agree.
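Since the Inferencer accepts device entries in more than one form, any shared devices option would likely need a small normalization step. A hedged sketch (the helper name is made up; the accepted input types are an assumption based on common PyTorch usage, not the exact Haystack type):

```python
from typing import List, Union

import torch

# Hypothetical normalisation helper: accept device specs as strings
# (e.g. "cpu", "mps", "cuda:0") or torch.device objects, and return a
# uniform list of torch.device objects for downstream model loading.
def normalise_devices(
    devices: List[Union[str, torch.device]],
) -> List[torch.device]:
    return [
        d if isinstance(d, torch.device) else torch.device(d)
        for d in devices
    ]
```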