
@huihuifan

OSError: Model file not found: C

Command:

python generate.py data-bin/writingPrompts --path C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/pretrained_checkpoint.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --sampling-temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/fusion_checkpoint.pt'}"

I am trying to use the pre-trained models, which I have saved at this path: C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/fusion_checkpoint.pt

Full command/error:

PS C:\Users\Richard\Documents\story_creation\fairseq-master> python generate.py data-bin/writingPrompts --path C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/mod
els/pretrained_checkpoint.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --sampling-temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'C:/Users/Rich
ard/Documents/story_creation/fairseq-master/fairseq/models/'}"
Namespace(beam=1, cpu=False, data='data-bin/writingPrompts', fp16=False, gen_subset='test', left_pad_source='True', left_pad_target='False', lenpen=1, log_format=None, log_interval=1000, max_len_a=0, max_len_b=200, max_sentences=32, max_source_positions=1024, max_target_positions=1024, max_tokens=None, min_len=1, model_overrides="{'pretrained_checkpoint':'C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/'}", nbest=1, no_beamable_mm=False, no_early_stop=False, no_progress_bar=False, num_shards=1, path='C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/pretrained_checkpoint.pt', prefix_size=0, print_alignment=False, quiet=False, raw_text=False, remove_bpe=None, replace_unk=None, sampling=True, sampling_temperature=0.8, sampling_topk=10, score_reference=False, seed=1, shard_id=0, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='translation', unkpen=0, unnormalized=False)
| [wp_source] dictionary: 19025 types
| [wp_target] dictionary: 112832 types
| data-bin/writingPrompts test 15138 examples
| data-bin/writingPrompts test 15138 examples
| loading model(s) from C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/pretrained_checkpoint.pt
Traceback (most recent call last):
  File "generate.py", line 164, in <module>
    main(args)
  File "generate.py", line 41, in main
    models, _ = utils.load_ensemble_for_inference(args.path.split(':'), task, model_arg_overrides=eval(args.model_overrides))
  File "C:\Users\Richard\Documents\story_creation\fairseq-master\fairseq\utils.py", line 145, in load_ensemble_for_inference
    raise IOError('Model file not found: {}'.format(filename))
OSError: Model file not found: C
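The traceback itself hints at the cause. `utils.load_ensemble_for_inference` calls `args.path.split(':')` to support colon-separated ensembles of checkpoints, which clashes with Windows drive letters. A minimal sketch (not fairseq code) of the split behavior, using the path from the command above:

```python
# Reproduce why the loader reports "Model file not found: C":
# splitting a Windows path on ':' turns the drive letter into the
# first "checkpoint filename" the loader tries to open.
path = "C:/Users/Richard/Documents/story_creation/fairseq-master/fairseq/models/pretrained_checkpoint.pt"
filenames = path.split(":")
print(filenames[0])  # -> C
```

A common workaround on Windows is to pass a relative path (run the command from a directory where the checkpoint is reachable without a drive letter), so the path contains no colon.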

I have had issues training, which is why I thought I should just generate from the pretrained models. Do I need to get training working before I can use the pretrained models?

Thanks!

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 10 (8 by maintainers)

Top GitHub Comments

1 reaction
huihuifan commented, Sep 7, 2018

@myleott @elfsmelf @vanhuyz I pushed a change to the readme to clarify how the data should be preprocessed.

The issue is that the data you download is the full dataset, but in the paper I only model the first 1k tokens of each story (the full stories are quite long). I wrote this in the readme, but it wasn't in the copy-pastable code part. I added some example Python code to the readme to construct this — basically, open the file in Python and slice each story with i[0:1000]. If I do this from scratch (mkdir test_data, cd test_data, then follow the readme: download the data, preprocess it, run preprocess.py), I can reproduce the correct vocabulary sizes:

| [wp_source] Dictionary: 19024 types
| [wp_target] Dictionary: 104959 types
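The truncation step described above could be sketched as follows. This is a hypothetical helper, not the readme's exact code; the file names and the one-story-per-line format are assumptions, and the 1000-token limit matches the "first 1k tokens" described in the comment:

```python
def truncate_stories(in_path, out_path, max_tokens=1000):
    """Copy in_path to out_path, keeping only the first max_tokens
    whitespace-separated tokens of each line (one story per line).
    This mirrors the i[0:1000] slicing described in the readme."""
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            tokens = line.split()
            fout.write(" ".join(tokens[:max_tokens]) + "\n")
```

Running this on each split (train/valid/test target files) before preprocess.py should yield the vocabulary sizes shown above.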

@myleott, I don’t see the tokenization issue you mention.

0 reactions
myleott commented, Sep 8, 2018

Closing. @vanhuyz, please feel free to reopen if you run into any issues.
