How to use a custom dataset?
See original GitHub issue

I’ve changed the default_config.py to point to a custom folder with images:

folder/path
|----/image001.jpg
|----/image002.jpg
…
But it returned:
ValueError: num_samples should be a positive integer value, but got num_samples=0
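This error comes from PyTorch's `RandomSampler`, which is created inside `DataLoader` when `shuffle=True` and refuses a dataset of length zero. In practice it usually means the configured path matched no images, so the dataset is empty. A minimal reproduction (the `EmptyDataset` class here is a stand-in for an image dataset whose folder scan found nothing, not the project's actual code):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class EmptyDataset(Dataset):
    """Simulates a dataset whose folder scan matched zero files."""
    def __len__(self):
        return 0
    def __getitem__(self, idx):
        raise IndexError(idx)

try:
    # shuffle=True builds a RandomSampler, which validates len(dataset) > 0
    # at construction time and raises the ValueError from the issue.
    DataLoader(EmptyDataset(), batch_size=2, shuffle=True)
except ValueError as e:
    print(e)
```

So before changing the dataloader, it is worth checking that the image path in default_config.py actually contains files with the extensions the loader scans for.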
Issue Analytics
- State:
- Created 3 years ago
- Comments: 11 (3 by maintainers)

I get this error too, but when I create a “val” folder for validation, the error disappears. Then I get a new error, “out of memory”, even though I set “batch_size = 2” and “crop_size = 64”. Could you post your default_config.py if you can run train.py?
Yes, writing your own dataloader solves this issue.
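A minimal sketch of such a custom dataset, assuming a flat folder of images with no class subdirectories (the folder path, accepted extensions, and lack of a transform are placeholder assumptions, not the project's actual config):

```python
import os
from PIL import Image
from torch.utils.data import DataLoader, Dataset

class FlatImageDataset(Dataset):
    """Loads every image in a single flat folder (no class subdirectories)."""
    def __init__(self, root, transform=None):
        self.paths = sorted(
            os.path.join(root, f)
            for f in os.listdir(root)
            if f.lower().endswith((".jpg", ".jpeg", ".png"))
        )
        if not self.paths:
            # Fail early with a clear message instead of hitting
            # "num_samples=0" later inside the DataLoader.
            raise RuntimeError(f"No images found in {root}")
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img) if self.transform else img

# Hypothetical usage:
# dataset = FlatImageDataset("folder/path")
# loader = DataLoader(dataset, batch_size=2, shuffle=True)
```

Unlike torchvision's `ImageFolder`, this does not expect one subdirectory per class, which matches the flat layout described at the top of the issue.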