Accelerator not recognizing TPU in Google Colab and Kaggle Kernels

I installed and imported accelerate in both Kaggle Kernels and Google Colab with the TPU turned on, but it detects the CPU instead of the TPU when I run the following code:

!pip install -q accelerate

import accelerate

# Ask Accelerate which device it picked up
acc = accelerate.Accelerator()
device = acc.device
print(device)

The snippet above just outputs cpu when run on either platform with the TPU enabled.

Is there something that I am doing wrong?

PyTorch version: 1.7.0
Python version: 3.7.9
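
Accelerate can only pick up a TPU when the torch_xla package (the PyTorch/XLA bridge) is importable, and torch_xla is not preinstalled alongside every PyTorch build on Colab or Kaggle. A minimal sanity check, assuming a torch_xla build that matches the installed PyTorch:

try:
    import torch_xla.core.xla_model as xm
    print(xm.xla_device())  # e.g. xla:0 when the TPU runtime is reachable
except ImportError:
    print("torch_xla is not installed, so Accelerate falls back to CPU")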

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 18 (11 by maintainers)

Top GitHub Comments

2 reactions
sgugger commented, Apr 26, 2021

This is fixed by #44: there is now a notebook_launcher that helps you run your training function in a Colab or Kaggle notebook. Will add some examples soon!
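
A minimal sketch of how that launcher is typically used, assuming a recent accelerate release (training_function and num_processes=8 are illustrative, not from the issue; 8 matches the cores of the TPU v2/v3 runtimes on Colab and Kaggle):

from accelerate import Accelerator, notebook_launcher

def training_function():
    # Inside the launched processes, Accelerator() should report an XLA device
    accelerator = Accelerator()
    print(accelerator.device)

notebook_launcher(training_function, num_processes=8)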

2 reactions
romanoss commented, Apr 19, 2021

This is a Kaggle notebook from the current Shopee competition. The easiest route is probably to join the competition and upload shopee-pytorch-eca-nfnet-l0-image-training.zip.

I think the only things you have to change are RETRAIN_MODEL = '' and the USE_TPU_AND_ACCELERATE switch.

A small bug: in train_fn(), change data[k] = v.to(Config.DEVICE) to data[k] = v when USE_TPU_AND_ACCELERATE=True.
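
In context, the fix looks roughly like the sketch below (train_fn, Config.DEVICE, and USE_TPU_AND_ACCELERATE are names from the linked notebook, reproduced here as assumptions; with Accelerate, accelerator.prepare() on the dataloader already places each batch on the right device, so the manual .to() call is redundant on TPU):

for data in train_loader:
    for k, v in data.items():
        if USE_TPU_AND_ACCELERATE:
            data[k] = v  # already on the XLA device via accelerator.prepare()
        else:
            data[k] = v.to(Config.DEVICE)  # manual CPU/GPU placement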

Top Results From Across the Web

  • TPU not working in kernel | Data Science and Machine Learning
    It looks like the accelerator is not getting selected correctly. Has this been a persistent issue for you? Normally I would recommend to...
  • TPUs in Colab - Colaboratory
    Navigate to Edit→Notebook Settings; select TPU from the Hardware Accelerator drop-down. Next, we'll check that we can connect to the TPU:
  • Tensor Processing Unit (TPU) - PyTorch Lightning
    To get a TPU on Colab, follow these steps: Go to https://colab.research.google.com/. Click "new notebook" (bottom right of pop-up).
  • Google Colab TPU: TF.data and TF.keras not working
    Replace: out = model.fit(dataset, batch_size=params['batch_size'], epochs=params['epochs'], validation_data=[x_val, y_val], verbose=0).
  • Training on TPUs with Accelerate - Hugging Face
    While on a TPU that last part is not as important, a critical part to ... TPUs such as those provided in Kaggle...
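
The Colab result above is cut off right before its connectivity check; that check typically looks something like the sketch below (TensorFlow side, assuming the TPU runtime has been selected under Edit→Notebook Settings):

import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print(tf.config.list_logical_devices('TPU'))  # should list the 8 TPU cores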
