WandB incorrectly recognising transformers import as PyTorch (should be dynamic to PT or TF)
See original GitHub issue
- Weights and Biases version: 0.9.5
- Python version: 3.6
- Operating System: MacOS
Description
Using Colab, I was trying to run a sweep on a tf.keras model whose embeddings were created with Hugging Face's transformers tokenizer.
However, merely importing transformers makes WandB assume you want to run a PyTorch model (verified by calling wandb.config.as_dict()), so when running the sweep, although the output shows a correct setup, nothing happens after the links to the various project pages/runs are posted in the console output.
What I Did
!pip install transformers
!pip install wandb
import transformers
import wandb
!wandb login
wandb.init()
wandb.config.as_dict()
> {'_wandb': {'desc': None,
              'value': {'cli_version': '0.9.5',
                        'framework': 'torch',
                        'huggingface_version': '3.0.2',
                        'is_jupyter_run': True,
                        'is_kaggle_kernel': False,
                        'python_version': '3.6.9'}}}
A workaround for now is to open a separate Colab notebook, convert the inputs to embeddings as usual with the Hugging Face tokenizer, and save the embeddings as a NumPy array to your Colab directory.
In your other wandb notebook, don't import transformers; just load the saved embeddings. After doing this, wandb.config.as_dict()
reports 'framework': 'tensorflow', and the sweep runs as expected.
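The two-notebook workaround can be sketched as below. The file name and the dummy array are illustrative (the real array would come from the transformers tokenizer, shown in comments); only NumPy is needed in the second notebook:

```python
import numpy as np

# Notebook 1 -- import transformers here and build the model inputs, e.g.:
#   from transformers import AutoTokenizer
#   tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
#   encoded = tokenizer(texts, padding=True, return_tensors="np")
#   embeddings = encoded["input_ids"]
# For this sketch, a dummy array stands in for the tokenizer output:
embeddings = np.arange(12, dtype=np.int64).reshape(3, 4)
np.save("embeddings.npy", embeddings)  # saved to the Colab working directory

# Notebook 2 -- do NOT import transformers; reload the array instead, so
# wandb's framework autodetection reports 'tensorflow' for the Keras model.
loaded = np.load("embeddings.npy")
assert np.array_equal(loaded, embeddings)
```

Because transformers is never imported in the second notebook, WandB's import-based detection no longer flags the run as PyTorch.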
Issue Analytics
- State:
- Created 3 years ago
- Comments: 7 (3 by maintainers)
Top GitHub Comments
Issue-Label Bot is automatically applying the label bug to this issue, with a confidence of 0.90. Please mark this comment with 👍 or 👎 to give our bot feedback!
Links: app homepage, dashboard and code for this bot.
In the past year we’ve majorly reworked the CLI and UI for Weights & Biases. We’re closing issues older than 6 months. Please comment to reopen if this is still relevant.