
train slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb on AVA2.1

See original GitHub issue

Thanks for your error report and we appreciate it a lot. If you feel we have helped you, please give us a STAR! 😆

Checklist

Describe the bug

I processed AVA with the dataset's processing script, but during training the expected frames could not be found. On inspection, I found that most videos in the dataset are not actually 30 FPS, while the data was processed assuming 30 FPS. As a result, fewer than 27000 frames are generated per video, so the frame indices computed from the annotations point to files that do not exist. How can I solve this problem?
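For context, the failure mode is that the loader derives the frame index for an annotated second from the clip's start timestamp under a hard-coded 30 FPS assumption. A minimal sketch (illustrative names, not the exact mmaction2 code; the formula matches the debug values printed in the traceback):

```python
# Illustrative sketch of how an AVA-style loader maps an annotation
# timestamp to a raw-frame index, assuming frames were extracted at
# a constant 30 FPS. Frame files are 1-indexed.

def center_frame_index(timestamp, timestamp_start=900, fps=30):
    # AVA annotates one keyframe per second of the 15-minute clip.
    return (timestamp - timestamp_start) * fps + 1

# Values matching the debug prints in the traceback:
# timestamp=1460, timestamp_start=900, fps=30
print(center_frame_index(1460))  # 16801

# If the real video FPS is below 30, fewer frames exist on disk than
# this index assumes, so loading the computed img_*.jpg raises
# FileNotFoundError.
```

This is why a video that was really extracted at, say, 25 FPS runs out of frames well before the indices the annotations expect.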

Debug prints from the dataset worker, interleaved with the traceback in the console:

center_index: 16801, FPS: 30
timestamp: 1460, timestamp_start: 900, fps: 30
start: 16769, end: 16833, center_index: 16801
i: 0, frame_idx: 16768

Traceback (most recent call last):
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/tools/train.py", line 224, in <module>
    main()
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/tools/train.py", line 212, in main
    train_model(
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/mmaction/apis/train.py", line 232, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs, **runner_kwargs)
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.

Original Traceback (most recent call last):
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/mmaction/datasets/base.py", line 289, in __getitem__
    return self.prepare_train_frames(idx)
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/mmaction/datasets/ava_dataset.py", line 312, in prepare_train_frames
    return self.pipeline(results)
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/mmaction/datasets/pipelines/compose.py", line 50, in __call__
    data = t(data)
  File "/data1/Hatcher/model/Action_Recognition/mmaction2/mmaction/datasets/pipelines/loading.py", line 1307, in __call__
    img_bytes = self.file_client.get(filepath)
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/mmcv/fileio/file_client.py", line 992, in get
    return self.client.get(filepath)
  File "/data1/Hatcher/anaconda3/envs/open-mmlab3/lib/python3.8/site-packages/mmcv/fileio/file_client.py", line 517, in get
    with open(filepath, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/data1/Hatcher/model/Action_Recognition/mmaction2/data/ava/rawframes/7g37N3eoQ9s/img_24539.jpg'

Reproduction

  1. What command or script did you run?

     python tools/train.py configs/detection/ava/slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb.py

  2. Did you make any modifications on the code or config? Did you understand what you have modified?

     no

  3. What dataset did you use?

     AVA 2.1

Environment

  1. Please run PYTHONPATH=${PWD}:$PYTHONPATH python mmaction/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as:
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback

If applicable, paste the error traceback here.


Bug fix

If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
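One possible workaround (an assumption on my part, not an official mmaction2 fix): re-extract the rawframes while forcing a constant 30 FPS output rate, e.g. with ffmpeg's -r output option, so every video yields the frame count the annotations expect. A sketch that builds the extraction command (paths and quality flag are illustrative):

```python
# Build an ffmpeg command that resamples a video to a constant fps
# before dumping numbered JPEG frames. The -r output option duplicates
# or drops frames so the output matches the requested rate.
import shlex

def extract_cmd(video_path, out_dir, fps=30):
    """Return an ffmpeg command string for fixed-rate frame extraction."""
    return (f'ffmpeg -i {shlex.quote(video_path)} -r {fps} -q:v 1 '
            f'{shlex.quote(out_dir)}/img_%05d.jpg')

print(extract_cmd('videos/7g37N3eoQ9s.mp4', 'rawframes/7g37N3eoQ9s'))
```

With every clip resampled to 30 FPS, a 15-minute segment produces ~27000 frames, and the index computed from the annotation timestamp always lands on an existing file.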

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5

Top GitHub Comments

Top Results From Across the Web

mmaction2/README.md at master - AVA - GitHub
The key characteristics of our dataset are: (1) the definition of atomic ... AVA2.2 ... Example: train SlowOnly model on AVA with periodic...
Spatio Temporal Action Detection Models
Here, we go one step further and model spatio-temporal relations to capture ... AVA2.1 ... Example: train ACRN with SlowFast backbone on AVA...
AVA: A Video Dataset of Atomic Visual ... - Google Research
The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute movie clips, where actions are localized in space and time, resulting...
