Hi,
When predicting over and over, RAM usage keeps climbing because something is not released, and it eventually crashes the script or my whole computer.
I tried deleting some of my own objects (it's not them), running gc.collect() at the end of the loop, deleting the variable holding the predict_img_with_smooth_windowing() result, and even adding a time.sleep() (you never know...).
Here is a piece of code that imports an image and predicts over a few iterations with the same model and weights.
Any idea what is leaking, or why?
Note: this code has no purpose other than to demonstrate the problem. The problem originally appeared when predicting segmentations of the same image while loading a list of weights for the same model.
Otherwise, super great piece of code, thanks a lot!!!
EDIT: After some thinking, this might just be related to TensorFlow's poor memory management and its memory leaks here and there.
I just ran this on CPU instead of GPU and the leak does not occur, so please close or delete this issue, sorry. It's purely TensorFlow-related, and fixed by updating tensorflow_gpu, CUDA and cuDNN.
import gc
import time

import cv2
import numpy as np
from keras_unet.models import satellite_unet
from skimage.io import imread
from smooth_tiled_predictions import predict_img_with_smooth_windowing

imagepath = "image.tif"
lene = imread(imagepath)
pixels = lene[7]  # keep a single slice of the image stack
model = satellite_unet(input_shape=(256, 256, 1))
model.load_weights("model_weights_87.h5")
del lene

for i in range(1, 10):
    im = np.expand_dims(pixels, 2)
    im = im / 255
    predictions_smooth = predict_img_with_smooth_windowing(
        input_img=im,
        window_size=256,
        subdivisions=2,  # minimal amount of overlap for windowing; must be an even number
        nb_classes=1,
        pred_func=(
            lambda img_batch_subdiv: model.predict(img_batch_subdiv, verbose=1, batch_size=1)
        ),
    )
    meg = (predictions_smooth > 0.01).astype(np.uint8)
    cv2.imwrite(str(i) + ".tif", meg)
    del predictions_smooth  # does not change anything
    time.sleep(0.1)         # does not change anything
    gc.collect()            # does not change anything
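For anyone hitting something similar: a quick way to check whether the growth comes from Python-side objects or from the native TF runtime is to diff allocation snapshots from the standard-library tracemalloc between iterations. A minimal sketch; leaky_step here is a hypothetical stand-in for one prediction iteration, not part of the code above:

```python
import tracemalloc

cache = []

def leaky_step():
    # hypothetical stand-in for one iteration that retains memory on the Python side
    cache.append(bytearray(10**6))

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(5):
    leaky_step()
after = tracemalloc.take_snapshot()

# Python-side leaks show up in this diff; allocations made inside
# native TF/CUDA code do not, because tracemalloc only sees the
# Python memory allocator.
stats = after.compare_to(before, "lineno")
for stat in stats[:3]:
    print(stat)

grown = sum(stat.size_diff for stat in stats)
```

If these counters stay flat while the process RSS still climbs, the leak is on the native side, which matches the CPU-vs-GPU observation above.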