Hardware acceleration on Apple Silicon with Metal plugin
Hello!
I’m looking for a way to accelerate the XLA compiler for Apple’s M1 (elixir-nx/nx#490). Apple provides a PluggableDevice plugin for the METAL platform, but it doesn’t yet include an XLA backend for it.
Do you have any plans to target M1’s GPU?
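As a quick sanity check of the situation described above, you can inspect which XLA backend is actually in use. This is a minimal sketch assuming the `jax` package is installed; on an M1 Mac without a Metal/GPU XLA backend, JAX falls back to CPU:

```python
# Sketch: inspect which XLA backend JAX selected on this machine.
# Assumes `jax` is installed; without an accelerator backend it
# reports the CPU backend.
import jax

print(jax.default_backend())  # "cpu" when no accelerator backend is present
print(jax.devices())          # the XLA devices visible to this process
```

If the first line prints `cpu` on an M1, the GPU is not being used by XLA, which is exactly the gap this issue is about.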
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 74
- Comments: 49 (24 by maintainers)
Top Results From Across the Web
Metal Overview - Apple Developer
Metal powers hardware-accelerated graphics on Apple platforms by providing a low-overhead API, rich shading language, tight integration between graphics and ...
Read more >

Apple Silicon Mac M1 natively supports TensorFlow 2.6 GPU ...
You can now leverage Apple's tensorflow-metal PluggableDevice in TensorFlow v2.5 for accelerated training on Mac GPUs directly with Metal.
Read more >

Getting started with TensorFlow & PyTorch on Apple Metal GPU
This video is all you need to install both TensorFlow and PyTorch with Apple Metal hardware acceleration on the latest Apple M1 chip-based ...
Read more >

Installing PyTorch on Apple M1 chip with GPU Acceleration
TensorFlow was the first framework to become available on Apple Silicon devices. Using the Metal plugin, TensorFlow can utilize the MacBook's GPU.
Read more >

Will we ever see GPU acceleration on the Mac plugin?
This is already complete and the Metal backend (i.e. macOS GPUs) will be usable soon. This includes Apple Silicon M1 and M2 GPUs....
Read more >
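Several of the results above mention the tensorflow-metal PluggableDevice. A minimal way to check whether it is actually registered, assuming `tensorflow` (and, on macOS, `tensorflow-metal`) is installed:

```python
# Sketch: check whether a PluggableDevice GPU (such as the one
# provided by tensorflow-metal) is visible to TensorFlow.
# Assumes `tensorflow` is installed; `tensorflow-metal` is needed
# on macOS for the GPU to appear.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(gpus)  # non-empty on an M1 Mac with tensorflow-metal installed
```

Note that this only confirms the TensorFlow plugin path works; it says nothing about an XLA backend for Metal, which is the subject of this issue.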
Yes, JAX+IREE (and Nod.ai-tuned SHARK) on Apple Silicon is a high priority for us at Nod.ai. We hope to have the JAX pipeline fleshed out once a few more upstream pieces land. But we're happy to help if anyone else is trying it.
FYI - news from one of IREE’s users/contributors relevant to this: https://nod.ai/pytorch-m1-max-gpu/
They were successful in adapting and tuning IREE for this case. It is still quite early work, but promising.