
How to load a fine-tuned model and run inference after running run_clip.py?

See original GitHub issue

System Info

  • transformers version: 4.22.0.dev0
  • Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
  • Python version: 3.9.12
  • Huggingface_hub version: 0.8.1
  • PyTorch version (GPU?): 1.12.0+cu102 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: <fill in>
  • Using distributed or parallel set-up in script?: <fill in>

Who can help?

Hi @ydshieh, after running run_clip.py, how do I load the fine-tuned model and run inference? My inference code is as follows:


import requests
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("clip-roberta-finetuned")
processor = AutoProcessor.from_pretrained("clip-roberta-finetuned")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)

print("auto model probs:", probs)

The following error occurred:

D:\software\anaconda\envs\transformers\python.exe D:/NLU/transformers/examples/pytorch/contrastive-image-text/predict.py
Traceback (most recent call last):
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\feature_extraction_utils.py", line 402, in get_feature_extractor_dict
    resolved_feature_extractor_file = cached_path(
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\utils\hub.py", line 300, in cached_path
    raise EnvironmentError(f"file {url_or_filename} not found")
OSError: file clip-roberta-finetuned\preprocessor_config.json not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\NLU\transformers\examples\pytorch\contrastive-image-text\predict.py", line 6, in <module>
    processor = AutoProcessor.from_pretrained("clip-roberta-finetuned")
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\processing_auto.py", line 249, in from_pretrained
    return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 182, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 226, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\feature_extraction_auto.py", line 289, in from_pretrained
    config_dict, _ = FeatureExtractionMixin.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs)
  File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\feature_extraction_utils.py", line 443, in get_feature_extractor_dict
    raise EnvironmentError(
OSError: Can't load feature extractor for 'clip-roberta-finetuned'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'clip-roberta-finetuned' is the correct path to a directory containing a preprocessor_config.json file

Process finished with exit code 1
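The traceback boils down to a missing preprocessor_config.json in the checkpoint directory. A quick way to see which processor files are absent before calling AutoProcessor is a small helper like the one below (a hypothetical diagnostic sketch, not part of transformers; the exact file set depends on the tokenizer backing your model):

```python
from pathlib import Path

# Files AutoProcessor typically needs for a CLIP-style checkpoint;
# adjust the list for your tokenizer type.
REQUIRED_FILES = [
    "preprocessor_config.json",
    "tokenizer_config.json",
    "special_tokens_map.json",
]

def missing_processor_files(model_dir: str) -> list[str]:
    """Return the expected processor files absent from model_dir."""
    d = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (d / name).is_file()]
```

Running `missing_processor_files("clip-roberta-finetuned")` before loading makes the cause of the OSError obvious at a glance.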

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, …)
  • My own task or dataset (give details below)

Reproduction

OSError: file clip-roberta-finetuned\preprocessor_config.json not found

Expected behavior

The fine-tuned model loads and inference runs successfully.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5

Top GitHub Comments

1 reaction
gongshaojie12 commented on Aug 12, 2022

Hi, @ydshieh thanks. It works fine.

0 reactions
ydshieh commented on Aug 12, 2022

@gongshaojie12

Could you check if you have copied all these files from clip-roberta to clip-roberta-finetuned:

config.json
merges.txt
preprocessor_config.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.json

I don’t have any issue running AutoProcessor.from_pretrained("clip-roberta-finetuned") after copying all these files (ignoring, of course, the non-fine-tuned model weights).
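The copy step above can be scripted. The sketch below assumes both checkpoint directories are local; the file list mirrors the one in the comment (the BPE files merges.txt and vocab.json apply to a RoBERTa-style tokenizer), and config.json is skipped because the Trainer already writes a fine-tuned copy:

```python
import shutil
from pathlib import Path

# Processor/tokenizer files that run_clip.py's Trainer does not write
# into its output directory; copy them over from the base checkpoint.
PROCESSOR_FILES = [
    "preprocessor_config.json",
    "merges.txt",
    "special_tokens_map.json",
    "tokenizer.json",
    "tokenizer_config.json",
    "vocab.json",
]

def copy_processor_files(src_dir: str, dst_dir: str) -> list[str]:
    """Copy missing processor files from src_dir into dst_dir.

    Returns the names of the files that were copied; files already
    present in dst_dir are left untouched.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    copied = []
    for name in PROCESSOR_FILES:
        if (src / name).is_file() and not (dst / name).is_file():
            shutil.copy2(src / name, dst / name)
            copied.append(name)
    return copied
```

For example, `copy_processor_files("clip-roberta", "clip-roberta-finetuned")` fills in whatever the fine-tuned directory is missing, after which AutoProcessor.from_pretrained should find preprocessor_config.json.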


