Huge amount of CPU RAM needed during training
Hello team!
With the current version of fairseq we noticed that a huge amount of RAM (CPU RAM, not GPU RAM) is required to run the training. Moreover, the amount is correlated with the number of GPUs used on the same machine.
So my guess is that the binarized training data is loaded entirely into RAM by every GPU process, which means the amount of CPU RAM needed is roughly:
RAM ~= (number of GPUs) * sizeof(binarized data)
If this is true, the amount of RAM needed for medium/large training sets is huge (hundreds of GB) with respect to the size of the training set itself (less than 100 GB).
If this is the case, why can't we use a memory-mapped training set, so that the amount of RAM depends only on sizeof(binarized data)?
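To make the proposal concrete, here is a minimal sketch of the idea, assuming a hypothetical MmapTokenDataset and a precomputed offsets index (these names are not fairseq's actual API). The binarized file is mapped read-only, so all GPU worker processes share the same OS page cache pages instead of each holding a private copy:

```python
import numpy as np
import torch
from torch.utils.data import Dataset


class MmapTokenDataset(Dataset):
    def __init__(self, bin_path, offsets):
        # np.memmap only maps the file; no bytes are read until indexing.
        self.data = np.memmap(bin_path, dtype=np.int64, mode="r")
        self.offsets = offsets  # assumed array of sentence boundaries (element offsets)

    def __len__(self):
        return len(self.offsets) - 1

    def __getitem__(self, idx):
        start, end = self.offsets[idx], self.offsets[idx + 1]
        # Copy only the requested sentence; resident memory stays proportional
        # to the batch size, not to sizeof(binarized data) * (number of GPUs).
        return torch.from_numpy(np.array(self.data[start:end]))
```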
I'm available to work on this if needed; could you please give me some code context or a good starting point to begin with?
Top GitHub Comments
Just to provide another workaround for the RAM issue: I wrapped the mmap dataset with a customized dataset and got an error.
It is probably caused by some unpicklable data used in my customized dataset class, but I don't have enough time to look into it, so I tried the --dataset-impl=lazy option (with the data placed on a SATA SSD). It worked, but the speed decreased by 30%, and I can clearly see from the nvidia-smi command that the GPUs are not fully saturated. Then I created a ramfs mount following this link (https://unix.stackexchange.com/questions/66329/creating-a-ram-disk-on-linux), and voila! It runs smoothly. The speed only decreased by about 7%, which is fairly acceptable to me.

Hi @myleott yes! I spotted the problem with my previous implementation: I was creating all the tensors at startup, so the problem was the overhead. Even with mmapped data, creating every single tensor up front required too much memory.
This new version creates tensors lazily over a single mmapped memoryview. I was initially worried about the time overhead, but surprisingly I measure exactly the same wps (words per second) as the regular cached version, which is great!
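For reference, a minimal sketch of the eager-versus-lazy difference described above (file path and the spans index are illustrative, not the actual implementation):

```python
import numpy as np
import torch

# One shared, read-only mapping over the binarized file per process.
# Nothing is read from disk until a slice is actually touched.
data = np.memmap("train.bin", dtype=np.int64, mode="r")  # path is illustrative


def get_item(spans, idx):
    """Build the tensor for sentence `idx` on demand.

    The eager variant built one tensor per sentence at startup, which kept
    millions of tensor objects alive for the whole run; here only the
    requested sentence is materialized.
    """
    start, end = spans[idx]
    # np.array copies just this sentence into a small, writable buffer that
    # the returned tensor owns; the rest of the corpus stays on disk.
    return torch.from_numpy(np.array(data[start:end]))
```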
I have also run some measurements of RAM usage; here are my results:
MODEL SIZE TEST: Here I used a tiny training set, so the RAM usage is due only to the network model itself; this is basically the base RAM consumption, independent of the training set size.
So we can say that, for a base transformer model, fairseq requires ~1920 MB of CPU RAM per GPU process.
CACHED DATASET: This is the same base transformer training, but with a 12.6M-line cached dataset.
After removing the model overhead, this is the in-RAM size of the dataset. Here I see something I did not expect: the memory consumption grows less than linearly with the number of GPUs. Since every GPU process reloads the dataset entirely, I expected a linear dependency. Maybe there are some problems in my measurements?
MMAP DATASET: This is the same base transformer training, but with a 12.6M-line memory-mapped dataset.
By removing the model overhead (the memory-mapped dataset accounts for 2.9 GB of buff/cache memory), here we see some savings. I'm not sure why I still see ~493 MB per GPU: the whole dataset is memory-mapped, so it should not appear in resident memory. I think this is still some model-dependent data structure.
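One possible explanation for both puzzles is how RSS is counted on Linux: file-backed mmapped pages that a process has touched do appear in its RSS, and pages shared between forked workers are counted once per process, so per-process numbers can both over-count and grow less than linearly. A small psutil-based helper (hypothetical, not part of fairseq) that separates unique from shared memory per trainer process:

```python
import psutil


def report(pids):
    """Print a per-process memory breakdown for the given trainer PIDs."""
    for pid in pids:
        m = psutil.Process(pid).memory_full_info()
        # rss counts shared pages in every process (summing it over GPU workers
        # over-counts); uss is the memory that would be freed if the process
        # exited; pss splits shared pages evenly among the processes mapping them.
        print(f"pid={pid}: rss={m.rss >> 20} MB, uss={m.uss >> 20} MB, "
              f"pss={m.pss >> 20} MB, shared={m.shared >> 20} MB")
```

Comparing rss against uss/pss for the cached and mmap runs should show whether the ~493 MB per GPU is truly private memory or just shared, file-backed pages being attributed to each process.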