When predicting over and over, RAM usage keeps climbing because something is not released #9

Open
NBCS1 opened this issue Sep 11, 2021 · 0 comments

NBCS1 commented Sep 11, 2021

Hi,
When predicting over and over, RAM usage keeps climbing because something is not released, and eventually my computer or the script crashes.
I tried deleting some of my objects (they are not the cause), running gc.collect() at the end of the loop, deleting the variable that holds the predict_img_with_smooth_windowing() result, and even adding a time.sleep() (you never know...).
Here is a piece of code that imports an image and predicts over a few iterations with the same model and weights.
Any idea what is leaking, or why?

Note: this code has no purpose other than to demonstrate the problem. The problem originally appeared when predicting segmentations of the same image while looping over a list of weight files for the same model.

Otherwise, this is a super great piece of code, thanks a lot!!!

EDIT: After some thinking, this might just come down to TensorFlow's poor memory management and its known memory leaks. I just ran the same script on CPU instead of GPU and the problem does not occur, so sorry, please close or delete this issue. It is only TensorFlow related and was fixed by updating tensorflow_gpu, CUDA and cuDNN.

import cv2
import tensorflow
from keras_unet.models import satellite_unet
import numpy as np
import os
from skimage.io import imread
import time
import gc
# Missing from the original snippet; predict_img_with_smooth_windowing is
# presumably imported from smooth_tiled_predictions.py (adjust to your setup):
from smooth_tiled_predictions import predict_img_with_smooth_windowing

imagepath = "image.tif"
lene = imread(imagepath)
pixels = lene[7]  # take a single plane of the multi-page TIFF
model = satellite_unet(input_shape=(256, 256, 1))
model.load_weights("model_weights_87.h5")
del lene

for i in range(1, 10):
    im = np.expand_dims(pixels, 2)
    im = im / 255
    predictions_smooth = predict_img_with_smooth_windowing(
        input_img=im,
        window_size=256,
        subdivisions=2,  # Minimal amount of overlap for windowing. Must be an even number.
        nb_classes=1,
        pred_func=(
            lambda img_batch_subdiv: model.predict(img_batch_subdiv, verbose=1, batch_size=1)
        )
    )
    meg = (predictions_smooth > 0.01).astype(np.uint8)
    cv2.imwrite(str(i) + ".tif", meg)
    del predictions_smooth  # does not change anything
    time.sleep(0.1)  # does not change anything
    gc.collect()  # does not change anything
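
For anyone who hits the same symptom and cannot update their stack right away, here is a minimal workaround sketch, not from this thread, assuming the growth comes from calling model.predict() repeatedly under TensorFlow 2.x: call the model directly and clear the Keras session between weight files. The weight_files list below is hypothetical.

import gc

import tensorflow as tf
from keras_unet.models import satellite_unet

weight_files = ["model_weights_87.h5"]  # hypothetical: extend with your own list of weight files

for weights in weight_files:
    tf.keras.backend.clear_session()  # drop graph state left over from the previous model
    model = satellite_unet(input_shape=(256, 256, 1))
    model.load_weights(weights)

    def pred_func(img_batch_subdiv):
        # Calling the model directly avoids the per-call bookkeeping that
        # model.predict() is known to accumulate when invoked in a long loop.
        return model(img_batch_subdiv, training=False).numpy()

    # ... pass pred_func to predict_img_with_smooth_windowing(...) as in the snippet above ...

    del model
    gc.collect()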