
Evaluation Results are not Consistent in Consecutive Evaluations & Sensitivity to Batch Size

See original GitHub issue

Thank you for sharing this wonderful work! Could you help to look into the following two issues:

  1. I tested the code on the HMDB51 dataset; the results can be inconsistent across two consecutive evaluations (running line 414 of main_video.py, test_stats = evaluate(data_loader_val, model, device), twice).

  2. For fine-tuning with the Swin Transformer, I ran the code several times with a smaller batch size (i.e., 32) on 4 RTX 3090 GPUs, and the results for tuning the linear layer are around 71%. Could the larger batch size account for the gap from the reported 74%?

Thank you very much in advance!
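On the batch-size question, one common way to approximate a larger effective batch size under GPU memory limits is gradient accumulation. The arithmetic below is a minimal sketch; the numbers are illustrative and `effective_batch_size` is a hypothetical helper, not part of the AdaptFormer codebase.

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int, accum_steps: int = 1) -> int:
    """Number of samples contributing to one optimizer step under
    data-parallel training with optional gradient accumulation."""
    return per_gpu_batch * num_gpus * accum_steps

# Illustrative: 32 samples per GPU on 4 GPUs = 128 samples per step.
assert effective_batch_size(32, 4) == 128

# If memory only allows 8 samples per GPU, 4 accumulation steps
# recover the same effective batch size of 128.
assert effective_batch_size(8, 4, accum_steps=4) == 128
```

If the reported results used a larger effective batch size, matching it this way (and, if needed, scaling the learning rate accordingly) is a reasonable first thing to try before attributing the gap to other causes.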

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
ShoufaChen commented, Sep 19, 2022

I see. For the evaluation between two training epochs, random selection of video frames exists: https://github.com/ShoufaChen/AdaptFormer/blob/6967d676c1a5e5a11be2e2768a6e5c604bb043ed/datasets/kinetics.py#L286

The final results are obtained from https://github.com/ShoufaChen/AdaptFormer/blob/main/main_video.py#L436.
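The random frame selection described above means two calls to evaluate() between training epochs can score different clips of the same video, so the per-epoch numbers need not match. A minimal sketch of the effect (the `sample_frame_indices` helper is a hypothetical simplification, not the repository's actual sampling code): drawing from a seeded RNG makes the sampling, and hence the evaluation, reproducible.

```python
import random

def sample_frame_indices(num_frames: int, clip_len: int, rng=None):
    """Pick a random contiguous clip of `clip_len` frames from a video
    of `num_frames` frames (simplified stand-in for the dataset's
    random temporal sampling)."""
    rng = rng or random
    start = rng.randrange(max(1, num_frames - clip_len + 1))
    return list(range(start, start + clip_len))

# Unseeded: two consecutive "evaluations" may sample different frames,
# which is why the metrics can differ between consecutive runs.
first = sample_frame_indices(300, 16)
second = sample_frame_indices(300, 16)

# Seeded: identical RNG state yields identical clips, so repeated
# evaluations become consistent.
rng_a = random.Random(0)
rng_b = random.Random(0)
assert sample_frame_indices(300, 16, rng_a) == sample_frame_indices(300, 16, rng_b)
```

The final reported results, by contrast, come from the dedicated test path (main_video.py#L436), which is why they are the numbers to compare against.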

0 reactions
bruceyo commented, Sep 19, 2022

I see. For the evaluation between two training epochs, random selection of video frames exists:

https://github.com/ShoufaChen/AdaptFormer/blob/6967d676c1a5e5a11be2e2768a6e5c604bb043ed/datasets/kinetics.py#L286

The final results are obtained from https://github.com/ShoufaChen/AdaptFormer/blob/main/main_video.py#L436.

Thank you:)


