Module Not found: datasets_modules.datasets.output
See original GitHub issue

Environment info
- transformers version: 4.5.0.dev0
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not sure
- Using distributed or parallel set-up in script?: <fill in> ?
Who can help
Information
Model I am using (Bert, XLNet …): BART seq2seq
The problem arises when using:
- the official example scripts: (give details below)
- my own modified scripts: (give details below)
The tasks I am working on are:
- an official GLUE/SQuAD task: (give the name)
- my own task or dataset: (give details below)
To reproduce
Steps to reproduce the behavior:
- Installed via the “Install from source” method
- Ran this command, where “data/output.jsonl” is my dataset:
python examples/seq2seq/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang de \
--source_prefix "Translate English to Logical Forms: " \
--dataset_name data/output.jsonl \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
- Got the following error:
ModuleNotFoundError: No module named 'datasets_modules.datasets.output'
At first it told me that “datasets” was not installed, so I ran pip install datasets and that worked fine. Then I got this error and haven’t been able to figure out what it means or how to fix it.
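
For context (not from the original report): when --dataset_name is given a file path, the datasets library appears to treat it as the name of a dataset loading script and tries to import it as a dynamically generated module named after the file stem, which is where datasets_modules.datasets.output comes from. A quick sanity check that the file itself is readable is to load it explicitly as a local “json” dataset, e.g.:

# Assumed sanity check, using the path from the command above:
python -c "from datasets import load_dataset; print(load_dataset('json', data_files='data/output.jsonl'))"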
Top GitHub Comments
Yeah! So in my command script that I posted here, I used

--dataset_name data/output.jsonl \

but that command is for pre-loaded datasets, whereas mine is a custom one. So instead you’ll want to use:

@ashleylew Hi, I’m getting the same error. Can you share the data loading script? I tried a few things but it doesn’t work for me. Thanks
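
For anyone hitting the same error: a minimal sketch of the local-file variant that the first comment seems to be pointing at, assuming run_translation.py accepts --train_file/--validation_file for custom JSON/JSON Lines data (the train/validation file names are placeholders, and as far as I can tell the script also expects each record to carry a “translation” field mapping source and target language text):

# Sketch only: replace the placeholder paths with your own train/validation splits.
python examples/seq2seq/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang de \
--source_prefix "Translate English to Logical Forms: " \
--train_file data/train.json \
--validation_file data/val.json \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate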