
Memory leak during prediction with 20 images

See original GitHub issue

I wanted to make batching possible, so I wrote the following code, but it used 10.7 GB of RAM and crashed. I really don't know why; for a single image it uses 1.4 GB. What can be done to fix this issue? I am running prediction on the CPU.

```python
import cv2
import numpy as np
import torch
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.modeling import build_model

im = cv2.imread("./input.jpg")
images = [im for _ in range(20)]
images = [{"image": torch.from_numpy(np.transpose(image, (2, 0, 1)))}
          for image in images]

cfg = get_cfg()
cfg.merge_from_file('…/detectron2/configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml')
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # set threshold for this model
# cfg.MODEL.WEIGHTS = "./model_final.pth"  # detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl
cfg.MODEL.DEVICE = 'cpu'

model = build_model(cfg)
DetectionCheckpointer(model).load('./model_final.pth')
model.train(False)

print(model(images))
```
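One likely contributor, not discussed in the thread itself, is that calling the model outside a `torch.no_grad()` context keeps autograd bookkeeping alive for every forward pass, and feeding all 20 images at once makes the peak activation memory 20x that of a single image. A minimal sketch of running inference in fixed-size chunks under `torch.no_grad()`; `ToyModel`, `predict_in_chunks`, and the chunk size are illustrative stand-ins, not detectron2 API:

```python
import torch

def predict_in_chunks(model, inputs, chunk_size=4):
    """Run inference chunk by chunk under no_grad, so activations are
    freed after each chunk instead of accumulating for the whole list."""
    outputs = []
    with torch.no_grad():  # disables autograd graph construction
        for start in range(0, len(inputs), chunk_size):
            batch = inputs[start:start + chunk_size]
            outputs.extend(model(batch))
    return outputs

# Toy stand-in for a detector: maps each CHW tensor to one score.
class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 1)

    def forward(self, batch):
        return [self.linear(x.mean(dim=(1, 2))) for x in batch]

model = ToyModel().eval()
inputs = [torch.rand(3, 8, 8) for _ in range(20)]
preds = predict_in_chunks(model, inputs, chunk_size=5)
assert len(preds) == 20
assert all(not p.requires_grad for p in preds)  # no graph retained
```

The same pattern applies to the detectron2 snippet above: wrap the `model(images)` call in `torch.no_grad()` and pass the list in slices rather than all at once.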

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 10

Top GitHub Comments

4 reactions
memicalem commented, Jun 9, 2020

export LRU_CACHE_CAPACITY=1 worked for me to reduce memory significantly, but I really don’t know why.
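Since this is an environment variable, it must be set in the shell before the Python process starts, not from inside the script. A minimal sketch (the script name is illustrative):

```shell
# Set the variable in the environment that will launch the process.
export LRU_CACHE_CAPACITY=1
echo "$LRU_CACHE_CAPACITY"
# Then start inference from the same shell, e.g.:
#   python run_inference.py
```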

1 reaction
dhananjayd232 commented, Jun 18, 2020

> export LRU_CACHE_CAPACITY=1 worked for me to reduce memory significantly, but I really don’t know why.

Thanks @memicalem , The same thing worked for me as well.

Read more comments on GitHub >

Top Results From Across the Web

Huge memory leakage issue with tf.keras.models.predict()
MACSTUDIO-2022: the first prediction takes around 150 MB and subsequent calls ~70-80 MB. After, say, 10000 such calls to predict(), while my MBP memory usage stays...

Tensorflow model prediction resulting in memory leak (Out of ...)
I am trying to run predictions for an image classification model on a large (800000) number of image files. Below is a code snippet of...

Running out of GPU memory with just 3 samples of ...
Before the first onBatchEnd is called, I'm getting a "High memory usage in GPU, most likely due to a memory leak" warning, but...

Memory leak when running cpu inference - Gluon
I'm running into a memory leak when performing inference on an mxnet model (i.e. converting an image buffer to tensor and running one...

predicting many images ends up OOM error of GPU.
Have you checked the TF Tensor count to ensure the tensors in memory are the cause, rather than the image data itself?
