
Support M1 GPU in FARMReader

See original GitHub issue

Is your feature request related to a problem? Please describe. Since haystack v1.6 we support pytorch 1.12, which also means support for the M1 GPU. However, we currently initialize the device to be either cpu or cuda, depending on availability and on whether the user passes the use_gpu=True parameter. For GPU use on the M1, pytorch actually uses the mps backend. See: https://pytorch.org/docs/stable/notes/mps.html
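The selection logic described above can be sketched as a small helper. This is illustrative only (the function name is hypothetical, not Haystack's actual code); with real PyTorch the availability flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.

```python
# Hypothetical sketch of GPU-aware device selection that also covers the
# M1's "mps" backend (available since PyTorch 1.12). Names are illustrative.

def pick_device(use_gpu: bool, cuda_available: bool, mps_available: bool) -> str:
    """Return the device string a reader could initialize with."""
    if not use_gpu:
        return "cpu"
    if cuda_available:      # NVIDIA GPUs take precedence when present
        return "cuda"
    if mps_available:       # Apple-silicon GPU via the Metal (mps) backend
        return "mps"
    return "cpu"            # fall back when no accelerator is usable


print(pick_device(use_gpu=True, cuda_available=False, mps_available=True))  # -> mps
```

Without a check like the `mps_available` branch, an M1 machine silently falls back to the CPU even when `use_gpu=True`, which is exactly the behaviour this issue reports.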

If we could allow users to pass the actual device into the FARMReader, this might make GPU training and inference on the M1 possible.

Describe the solution you’d like Allow the user to pass devices=[<device>] into FARMReader.__init__ and use these devices in initialize_device_settings. We could make this non-breaking by making devices an optional argument to both the reader init and the device initialization.
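A minimal sketch of the proposed non-breaking change, assuming an optional `devices` argument that overrides the `use_gpu`-based auto-detection when given. The class and helper here are simplified stand-ins, not Haystack's real signatures:

```python
# Sketch only: `devices`, when provided, wins; otherwise the previous
# use_gpu behaviour is unchanged, so existing callers are unaffected.
from typing import List, Optional


def initialize_device_settings(use_gpu: bool,
                               devices: Optional[List[str]] = None) -> List[str]:
    if devices:                              # explicit devices win, e.g. ["mps"]
        return devices
    return ["cuda"] if use_gpu else ["cpu"]  # prior behaviour preserved


class FARMReader:                            # simplified stand-in class
    def __init__(self, use_gpu: bool = True,
                 devices: Optional[List[str]] = None):
        self.devices = initialize_device_settings(use_gpu, devices)


print(FARMReader(devices=["mps"]).devices)   # -> ['mps']
print(FARMReader(use_gpu=False).devices)     # -> ['cpu']
```

Because `devices` defaults to `None`, code written against the old `use_gpu`-only signature keeps working unmodified.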


Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 14 (13 by maintainers)

Top GitHub Comments

2 reactions
sjrl commented, Aug 18, 2022

That’s great! I would say that anywhere the user can pass options into initialize_device_settings should also accept a list of devices instead. Similar to what is already done in this load function for the Inferencer https://github.com/deepset-ai/haystack/blob/be127e5b61e60f59292a1e5d73676eb34691f668/haystack/modeling/infer.py#L175-L176

where devices is of type https://github.com/deepset-ai/haystack/blob/be127e5b61e60f59292a1e5d73676eb34691f668/haystack/modeling/infer.py#L128

So what is inconsistent at the moment is that the devices option is only supported in some places in Haystack. I think we should support it everywhere the user can pass in the use_gpu boolean.
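One way to make that consistency cheap to achieve is to normalize every accepted device spec into a single representation at the entry point, so all components share one code path. The function below is a hypothetical sketch (Haystack's `devices` also accepts `torch.device` objects, which this simplified version represents as strings):

```python
# Illustrative sketch: accept heterogeneous device specs and normalize
# them to strings, so every use_gpu call site can share one code path.
from typing import List, Union

DeviceLike = Union[str, int]


def normalize_devices(devices: List[DeviceLike]) -> List[str]:
    out: List[str] = []
    for d in devices:
        if isinstance(d, int):          # bare index -> CUDA ordinal, 0 -> "cuda:0"
            out.append(f"cuda:{d}")
        else:                            # strings like "mps" or "cpu" pass through
            out.append(d)
    return out


print(normalize_devices([0, "mps", "cpu"]))  # -> ['cuda:0', 'mps', 'cpu']
```

With a normalizer like this, each component that currently branches on `use_gpu` only needs to call one shared function rather than re-implementing device handling.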

1 reaction
sjrl commented, Aug 18, 2022

Yes, it seems to be used everywhere already, but we should make sure that it actually gets used, in addition to making sure we provide the devices parameter.

Yes I agree.
