
RoBERTa example crashes on libnat import

See original GitHub issue

There is an example of how to use the RoBERTa model on this page: https://github.com/pytorch/fairseq/tree/master/examples/roberta

import torch
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()  # disable dropout (or leave in train mode to finetune)

It stops working after commit 86857a58bf2919c7bec3c29c58234aa4c434d566, failing with this error:

ImportError                               Traceback (most recent call last)
<ipython-input-1-9914d0fa65af> in <module>
      1 import torch
----> 2 roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
      3 roberta.eval()  # disable dropout (or leave in train mode to finetune)

/usr/local/lib/python3.6/dist-packages/torch/hub.py in load(github, model, *args, **kwargs)
    334     sys.path.insert(0, repo_dir)
    335 
--> 336     hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF)
    337 
    338     entry = _load_entry_from_hubconf(hub_module, model)

/usr/local/lib/python3.6/dist-packages/torch/hub.py in import_module(name, path)
     68         spec = importlib.util.spec_from_file_location(name, path)
     69         module = importlib.util.module_from_spec(spec)
---> 70         spec.loader.exec_module(module)
     71         return module
     72     elif sys.version_info >= (3, 0):

/usr/lib/python3.6/importlib/_bootstrap_external.py in exec_module(self, module)

/usr/lib/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

~/.cache/torch/hub/pytorch_fairseq_master/hubconf.py in <module>
      6 import functools
      7 
----> 8 from fairseq.hub_utils import BPEHubInterface as bpe  # noqa
      9 from fairseq.hub_utils import TokenizerHubInterface as tokenizer  # noqa
     10 from fairseq.models import MODEL_REGISTRY

~/.cache/torch/hub/pytorch_fairseq_master/fairseq/__init__.py in <module>
      8 
      9 import fairseq.criterions  # noqa
---> 10 import fairseq.models  # noqa
     11 import fairseq.modules  # noqa
     12 import fairseq.optim  # noqa

~/.cache/torch/hub/pytorch_fairseq_master/fairseq/models/__init__.py in <module>
    126     if not file.startswith('_') and not file.startswith('.') and (file.endswith('.py') or os.path.isdir(path)):
    127         model_name = file[:file.find('.py')] if file.endswith('.py') else file
--> 128         module = importlib.import_module('fairseq.models.' + model_name)
    129 
    130         # extra `model_parser` for sphinx

/usr/lib/python3.6/importlib/__init__.py in import_module(name, package)
    124                 break
    125             level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127 
    128 

~/.cache/torch/hub/pytorch_fairseq_master/fairseq/models/insertion_transformer.py in <module>
      7 import torch
      8 import torch.nn.functional as F
----> 9 from fairseq import libnat
     10 from fairseq.models import register_model, register_model_architecture
     11 from fairseq.models.levenshtein_transformer import (

ImportError: cannot import name 'libnat'

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 12 (3 by maintainers)

Top GitHub Comments

8 reactions
zhenpingli commented, Mar 30, 2020

I found the solution. Just run python setup.py build_ext --inplace in the fairseq source folder and all extensions will be built. Besides, if you cannot use libnat_cuda, you need to set a CUDA_HOME path.

Thanks for your help; this took hours of my time. First, run python setup.py build_ext --inplace and pip install --editable . in the repository root. Second, you must set a CUDA_HOME path. The order is: put export PATH=/usr/lib/cuda/bin:$PATH, export LD_LIBRARY_PATH=/usr/lib/cuda/bin/lib64:$LD_LIBRARY_PATH, and export CUDA_HOME=/usr/lib/cuda in ~/.bashrc (/usr/lib/cuda is the NVIDIA driver path). Finally, run source ~/.bashrc.
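Condensed, the steps above look like the following. The paths follow the commenter's /usr/lib/cuda layout, and the library path uses the conventional $CUDA_HOME/lib64 rather than the bin/lib64 spelling in the comment; adjust both to your actual CUDA install.

```shell
# Build fairseq's C++/CUDA extensions inside the cloned source tree.
cd fairseq
python setup.py build_ext --inplace
pip install --editable .

# Point the build at your CUDA install so libnat_cuda can compile too.
# (Append these lines to ~/.bashrc, then run `source ~/.bashrc`.)
export CUDA_HOME=/usr/lib/cuda
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"
```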

4 reactions
myleott commented, Sep 30, 2019

The problem is that torch.hub doesn’t build the extensions, causing the import to fail.

This should be fixed now: https://github.com/pytorch/fairseq/commit/acb6fba005f45e363a6da98d7ce79c36c011d473
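The fix amounts to making the compiled extension optional at import time instead of a hard dependency. A minimal sketch of that guard pattern (the helper name here is illustrative, not fairseq's actual code):

```python
import importlib


def load_optional_extension(name):
    """Import a compiled extension if it has been built, else return None."""
    try:
        return importlib.import_module(name)
    except ImportError:
        # The extension was never compiled, e.g. the package was fetched
        # via torch.hub without running `python setup.py build_ext --inplace`.
        return None


# A never-built extension comes back as None instead of crashing the import;
# callers can then disable the models that depend on it.
libnat = load_optional_extension("fairseq.libnat")
if libnat is None:
    print("libnat not built; edit-based models will be unavailable")
```

With this pattern, loading roberta.large through torch.hub no longer fails just because the insertion/Levenshtein transformer models cannot find their extension.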
