Memory leak during prediction with 20 images
I wanted to make batching possible, so I wrote the following code, but it used 10.7 GB of RAM and crashed. I really don't know why. For one image it uses 1.4 GB of RAM. What could be done to fix this issue? I am doing prediction on the CPU.
```python
import cv2
import numpy as np
import torch
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

im = cv2.imread("./input.jpg")
images = [im for _ in range(20)]
images = [{"image": torch.from_numpy(np.transpose(image, (2, 0, 1)))} for image in images]

cfg = get_cfg()
cfg.merge_from_file("…/detectron2/configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # set threshold for this model
# cfg.MODEL.WEIGHTS = "./model_final.pth"  # detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl
cfg.MODEL.DEVICE = "cpu"

model = build_model(cfg)
DetectionCheckpointer(model).load("./model_final.pth")
model.train(False)

print(model(images))
```
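One likely contributor (an assumption on my part, not confirmed in this thread): the snippet calls the model without `torch.no_grad()`, so PyTorch may retain autograd bookkeeping, and all 20 images go through a single forward pass, so peak memory scales with the full batch. A minimal sketch of splitting the inputs into smaller chunks so peak memory scales with the chunk size instead; the `chunked` helper and the chunk size of 4 are illustrative, not from the original post:

```python
def chunked(items, size):
    """Yield successive sub-batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical usage with the detectron2 model from the question:
# with torch.no_grad():
#     outputs = []
#     for batch in chunked(images, 4):
#         outputs.extend(model(batch))

# Pure-Python demonstration of the batching helper itself:
batches = list(chunked(list(range(20)), 4))
print(len(batches))  # → 5
print(batches[0])    # → [0, 1, 2, 3]
```

Running sub-batches inside `torch.no_grad()` keeps the largest intermediate activations bounded by the sub-batch size, at the cost of a few extra forward calls.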
`export LRU_CACHE_CAPACITY=1` worked for me to reduce memory significantly, but I really don't know why.
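For reference, the variable must be set in the environment before the Python process starts; the mechanism is not explained in this thread, only that it reduced memory. A minimal sketch (the inference script name is hypothetical):

```shell
# Set before launching inference; reported in this thread to reduce memory significantly.
export LRU_CACHE_CAPACITY=1

# Then start prediction in the same shell, e.g.:
# python predict.py   # hypothetical inference script

echo "$LRU_CACHE_CAPACITY"  # → 1
```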
Thanks @memicalem, the same thing worked for me as well.