Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

BAM and PAM training on LEVIR-CD Dataset Memory Consumption Issue

See original GitHub issue

Hello, I read through all the existing issues before asking for your advice on resolving this one.

First of all, thanks for sharing this work; I’ve learned a lot from this project and am still learning.

I downloaded the pretrained PAM weights from the link in this project’s README and ran them on my own satellite images (1.5 m per pixel). The results are good, but not excellent. You probably trained your model on 0.5 m-per-pixel imagery, so as a first step I began testing the training methods on the LEVIR-CD dataset, planning to train on my own dataset afterwards. Training with the Base method completed without problems, but BAM and PAM demand a huge amount of memory (256 GB).

So I run the following command:

python3 ./ --save_epoch_freq 1 --angle 15 --dataroot ../DATASET/train --val_dataroot ../DATASET/val --name LEVIR-CDFA_BAM2 --lr 0.001 --model CDFA --SA_mode BAM --batch_size 8 --load_size 256 --crop_size 256 --preprocess rotate_and_crop


As you can see, I have 16 GB of memory (an AWS instance with an NVIDIA Tesla T4 GPU). From other issues I learned that I can pass the --ds 4 parameter (the self-attention module’s down-sample rate). With it, training consumes only 5 GB of memory, but, as expected, the results are not as accurate: your pretrained model is better than mine trained with --ds 4.
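The 256 GB vs. 5 GB gap is roughly what quadratic attention memory predicts: a spatial-attention map over N = H×W positions needs an N×N matrix, so down-sampling by --ds 4 shrinks that single tensor by a factor of 4⁴ = 256 (stored activations and gradients multiply the total further). A back-of-the-envelope sketch — the feature-map size and batch are assumed numbers for illustration, not the repository’s actual ones:

```python
def attention_matrix_bytes(h, w, batch=1, bytes_per_el=4):
    """Bytes needed for one (HW x HW) spatial-attention map in fp32."""
    n = h * w
    return batch * n * n * bytes_per_el

# Assumed numbers for illustration: a 128x128 feature map, batch size 8.
full = attention_matrix_bytes(128, 128, batch=8)           # 8 GiB for a single map
ds4 = attention_matrix_bytes(128 // 4, 128 // 4, batch=8)  # 32 MiB: 256x smaller
print(f"{full / 2**30:.1f} GiB vs {ds4 / 2**20:.1f} MiB")  # prints "8.0 GiB vs 32.0 MiB"
```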

So my question is: how can I optimize training without the down-sample rate and work around the memory consumption issue? From your paper I read:

We tested our methods on a desktop PC equipped with an Intel i7-7700K CPU and an NVIDIA GTX 1080Ti graphic card. We used GPU to accelerate the training and testing process.

Do you have any advice on how to reach the same model quality as the weights you shared via the link? Perhaps PC specs, method parameters, or input-image parameters?
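The usual GPU-memory workarounds here are smaller crops, mixed precision, gradient checkpointing (recompute the HW×HW attention map in the backward pass instead of storing it), and gradient accumulation (step a small micro-batch several times to emulate batch size 8). A minimal, generic PyTorch sketch of the last two, using a simplified stand-in for a PAM-style block — this is not the repository’s actual module, and on a T4 the forward could additionally be wrapped in torch.cuda.amp.autocast():

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class SpatialAttention(nn.Module):
    """Simplified stand-in for a PAM-style spatial self-attention block."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # B x HW x C'
        k = self.key(x).flatten(2)                    # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)           # B x HW x HW -- the big tensor
        v = self.value(x).flatten(2)                  # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + out

model = SpatialAttention(32)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

accum_steps = 4  # micro-batch of 2, stepped 4 times, emulates batch size 8
opt.zero_grad()
for step in range(accum_steps):
    x = torch.randn(2, 32, 16, 16, requires_grad=True)
    # Checkpointing recomputes the attention activations during backward
    # instead of keeping the HW x HW map alive for the whole forward pass.
    y = checkpoint(model, x, use_reentrant=False)
    loss = y.mean() / accum_steps  # scale so accumulated grads match one big batch
    loss.backward()
opt.step()
```

The trade-off is extra compute (each checkpointed segment runs its forward twice), which is usually acceptable when the alternative is not fitting in 16 GB at all.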

Thank you in advance

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6

Top GitHub Comments

tan90du-sx commented, Mar 19, 2021


johncse1 commented, May 24, 2021

Dear @loviji,
I am getting the same IndexError. I also cropped the dataset so that each image is 256x256. How can I make the label maps’ bit depth equal to 8? Please help me out.

Read more comments on GitHub >

Top Results From Across the Web

Training models when data doesn't fit in memory
Let us solve the fraud problem in a naive way and track memory usage. The first thing we usually do is to read...
Looking for Change? Roll the Dice and Demand Attention
Recently, there has been an effort to reduce the memory footprint of the attention ... The authors introduced the LEVIRCD change detection dataset...
Looking for change? Roll the Dice and demand Attention - arXiv
spatial-temporal attention module (PAM). The authors introduced the LEVIRCD change detection dataset and demonstrated excellent performance. Their training ...
Feature Decomposition-Optimization-Reorganization Network ...
The publicly available building dataset LEVIR-CD is employed to evaluate the change detection performance of our network.
Why does memory usage increase as a Keras neural network ...
I have 32 GB of RAM and am training a large dataset using a Keras ... For the first few steps of the...
