Training step is too slow
Hi,
Thank you for your code.
As I dug deeper into this code, I found that the training step is particularly slow. The problem (I guess) is the dataset construction, where too many operations (e.g., padding sequences, getting the history) are implemented in `__getitem__`.
Have you tried moving these operations into the `__init__` function instead? This would consume more memory but should significantly accelerate training.
Thanks.
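To make the idea concrete, here is a minimal sketch (the `pad_to_length` helper and the two dataset classes are illustrative toy names, not code from this repository): padding done per call in `__getitem__` versus padding done once in `__init__`, so that `__getitem__` becomes a plain lookup.

```python
import torch
from torch.utils.data import Dataset


def pad_to_length(tokens, max_len, pad_id=0):
    # Right-pad (or truncate) a list of token ids to a fixed length.
    return tokens[:max_len] + [pad_id] * max(0, max_len - len(tokens))


class LazyPaddingDataset(Dataset):
    # Pads every sample on access, so the same work is repeated each epoch.
    def __init__(self, sequences, max_len):
        self.sequences = sequences
        self.max_len = max_len

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        # Padding (and any history construction) happens here, on every call.
        return torch.tensor(pad_to_length(self.sequences[idx], self.max_len))


class PrecomputedPaddingDataset(Dataset):
    # Pads everything once in __init__; __getitem__ is a cheap lookup.
    def __init__(self, sequences, max_len):
        # Trades extra memory (all padded tensors held at once) for speed.
        self.padded = [torch.tensor(pad_to_length(s, max_len)) for s in sequences]

    def __len__(self):
        return len(self.padded)

    def __getitem__(self, idx):
        return self.padded[idx]
```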
Hi @abhshkdz @kdexd, thanks for your reply. I will try to move the padding function from `__getitem__` to `__init__` and check the memory consumption. I will get back to you later. Thanks! 😃

@shubhamagarwal92 Thanks for the suggestions! Both make sense to me. If you could send in a pull request, that'd be great, thanks!