HuggingFace Model Hub (summarisation) - models not working locally (404 not found)
Environment info
- transformers version: 4.10.0
- Platform: Linux-5.11.0-36-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Information
I am using a text summarisation model from the HuggingFace Model Hub. However, this issue occurs regardless of which model I use.
The problem arises when using any text summarisation model from the HuggingFace Model Hub locally.
The task I am working on is dialogue summarisation.
To reproduce
Steps to reproduce the behavior:
- Run this code locally with the environment specified above:
from transformers import pipeline
summarizer = pipeline("summarization", model="lidiya/bart-large-xsum-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face '''
print(summarizer(conversation))
- Output is:
2021-09-28 14:20:06.034022: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-09-28 14:20:06.034044: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
404 Client Error: Not Found for url: https://huggingface.co/lidiya/bart-large-xsum-samsum/resolve/main/tf_model.h5
404 Client Error: Not Found for url: https://huggingface.co/lidiya/bart-large-xsum-samsum/resolve/main/tf_model.h5
Traceback (most recent call last):
File "test.py", line 2, in <module>
summarizer = pipeline("summarization", model="lidiya/bart-large-xsum-samsum")
File "/home/teodor/Desktop/test/env/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 429, in pipeline
framework, model = infer_framework_load_model(
File "/home/teodor/Desktop/test/env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 145, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model lidiya/bart-large-xsum-samsum with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSeq2SeqLM'>, <class 'transformers.models.bart.modeling_tf_bart.TFBartForConditionalGeneration'>).
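For what it's worth, my assumption is that the model repository only hosts PyTorch weights (hence the 404 on tf_model.h5), so with only TensorFlow installed the pipeline falls back to the TF classes and fails. A minimal check of which frameworks transformers detects locally, assuming the usual top-level helpers:

# Minimal diagnostic sketch: report which deep-learning frameworks
# transformers can see in this environment.
from transformers import is_tf_available, is_torch_available

print("TensorFlow available:", is_tf_available())   # True in my environment
print("PyTorch available:", is_torch_available())   # False in my environment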
Expected behavior
On Google Colab and on the HuggingFace website, a string is output containing the summary of the input text: “Jeff wants to train a Transformers model on Amazon SageMaker. He can use the new Hugging Face Deep Learning Container. The documentation is available on HuggingFace.co and on the blog, Jeff can find it here. . . Jeff can train a model on Huging Face.co.”
Why is it not working locally? Any help would be much appreciated. I’ve been trying to solve this problem for the past few days but I haven’t found a working solution so far. Thank you!
It seems that PyTorch is not installed; you should install PyTorch to be able to use this model.
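For anyone hitting the same error, here is a minimal sketch of what that suggestion looks like in practice, assuming PyTorch is installed first. The framework="pt" argument is optional; once PyTorch is available the pipeline selects it automatically.

# Sketch of the suggested fix, assuming PyTorch has been installed first:
#   pip install torch
from transformers import pipeline

# With PyTorch available, the pipeline can download the repository's
# PyTorch weights from the Hub; framework="pt" just makes that explicit.
summarizer = pipeline(
    "summarization",
    model="lidiya/bart-large-xsum-samsum",
    framework="pt",
)
print(summarizer("Philipp: You can find everything in the Hugging Face documentation."))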
@patil-suraj, you helped me find the solution. The only thing that worked for me is:
Thank you very much for your patience. I hope many people see this.