Memory Problem?
Hi, I cloned your code and ran training on the WMT English-German task, but it failed with "RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generic/THCStorage.cu:66".
I ran it on a Tesla K40, which has the same 12 GB memory capacity as your Titan X, with the default settings.
I don't know why this happens. Do you have any idea? Thanks.
A fix that worked for me was to decrease batch_size in the parser arguments, which lowers the memory requirements (a sketch is below). I have it working on a GTX 1070 with batch_size = 32.
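If it helps, here is a minimal sketch of the kind of change meant above, assuming the training script defines its batch size through an argparse flag; the actual argument name and default in the repository may differ.

```python
import argparse

# Minimal sketch, assuming the training script exposes the batch size through
# argparse; the exact flag name and default in the repository may differ.
parser = argparse.ArgumentParser(description="Transformer training (sketch)")
parser.add_argument(
    "--batch_size",
    type=int,
    default=32,  # lowered value that fit an 8 GB GTX 1070, per the comment above
    help="number of sentence pairs per training batch",
)
args = parser.parse_args()

# A smaller batch size reduces peak GPU memory at the cost of more optimizer
# steps per epoch; on a 12 GB card a larger value may still fit.
print(f"Training with batch_size={args.batch_size}")
```

You would then launch the training script with something like `--batch_size 32`, and halve the value again if the out-of-memory error persists.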
@buptwhr I'm not sure, since I haven't tried this project. Maybe you can use another project at the following link, which I have tested on my GPU without any memory error. Hope this helps: https://github.com/DaoD/annotated-transformer