Memory Issues #93

Open · 7thstorm opened this issue Feb 20, 2021 · 6 comments

@7thstorm

System Specs:

  • Ubuntu 18.04
  • Proc: Xeon 2.7 GHz
  • RAM: 48 GB
  • GPU: Quadro P3200 6 GB
  • CUDA 11

The DeepStack process does not release memory after processing.

Memory usage upon starting DeepStack in Docker: 927 MB

Memory at 3.2 GB after registering 27 faces

This memory buildup prevents me from performing other functions, including registering more faces.

Either I'm missing something or memory management needs some serious improvement. The only way around this is to keep restarting the Docker container after each action (see the sketch below).
Thoughts?
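
For reference, a minimal sketch of that restart-after-each-registration workaround, assuming DeepStack's documented /v1/vision/face/register endpoint and the Docker CLI; the container name deepstack, the host/port, and the file names are placeholders:

```python
import subprocess

import requests

REGISTER_URL = "http://localhost:80/v1/vision/face/register"  # adjust host/port to your install
CONTAINER = "deepstack"  # placeholder container name


def register_face(image_path: str, userid: str) -> dict:
    """Register one face image with DeepStack."""
    with open(image_path, "rb") as f:
        response = requests.post(
            REGISTER_URL,
            files={"image": f},
            data={"userid": userid},
            timeout=60,
        )
    return response.json()


def restart_container() -> None:
    """Restart the container; currently the only way to get the memory back."""
    subprocess.run(["docker", "restart", CONTAINER], check=True)


faces = [("alice_1.jpg", "alice"), ("bob_1.jpg", "bob")]  # example data
for image_path, userid in faces:
    print(register_face(image_path, userid))
    restart_container()  # workaround until the leak is fixed
```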

@wills106 commented May 8, 2021

I have the same problem with my Jetson Nano / GTX 1650.
I haven't reached the limit yet, but I am guessing I will only be able to train about 30-ish faces per person before I run out of memory.
The GTX 1650 only has 4 GB of memory, and I don't have access to an Nvidia card with more.

I have maxed out at 3869 MB so far on the GTX, so between teaching people I have to restart the container.
Once it's fully loaded, it floats around 1181–1185 MB.
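
On the GPU side, a small watchdog along these lines could automate that restart (a sketch; it assumes nvidia-smi is on the PATH, a Docker deployment with a container named deepstack, and a 3500 MB threshold picked arbitrarily for a 4 GB card):

```python
import subprocess


def gpu_memory_used_mb(gpu_index: int = 0) -> int:
    """Return used GPU memory in MB as reported by nvidia-smi."""
    out = subprocess.run(
        [
            "nvidia-smi",
            f"--id={gpu_index}",
            "--query-gpu=memory.used",
            "--format=csv,noheader,nounits",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return int(out.stdout.strip())


# Restart DeepStack before the card is exhausted (threshold is a guess for a 4 GB GPU).
if gpu_memory_used_mb() > 3500:
    subprocess.run(["docker", "restart", "deepstack"], check=True)
```

Run from a cron or systemd timer, this at least avoids having to catch the problem by hand; it doesn't fix the underlying leak.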

@7thstorm (Author)

Is there any update on this?

@wills106

Just wondering if there has been any progress on the memory issue / memory leak where memory isn't released after training new faces?

@manalishi70

NOPE afaik
DeepStack CPU on a Xeon E6-1650v2 3.5 GHz:
48 images registered, 79% of 5 GB of memory used.

@BeanBagKing

I think I'm having the same issue on the Windows/GPU version. I can't tell how much memory is being used per process, either due to a limitation of nvidia-smi on Windows, my card (P400), or some combination. However, there is a definite correlation between GPU memory usage and DeepStack entering a state, unrecoverable without a restart, where all requests result in a 100/Timeout error.

Without better logs (#142) I can't be sure exactly what is going on, but there's a strong correlation.

I've been tracking my troubleshooting and the steps I've taken here: https://ipcamtalk.com/threads/deepstack-gpu-memory-issue-error-100-timeout-after-several-hours.60827/
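
In the meantime, a health probe along these lines can detect that stuck state and recycle DeepStack automatically (a sketch; it assumes a Docker deployment with a container named deepstack and the standard /v1/vision/face detection endpoint; on a native Windows install the restart step would differ, e.g. restarting the DeepStack service instead):

```python
import subprocess

import requests

DETECT_URL = "http://localhost:80/v1/vision/face"  # face detection endpoint; adjust host/port


def deepstack_responsive(probe_image: str) -> bool:
    """POST a small known-good image; treat timeouts or non-success replies as failure."""
    try:
        with open(probe_image, "rb") as f:
            r = requests.post(DETECT_URL, files={"image": f}, timeout=30)
        return r.ok and r.json().get("success", False)
    except (requests.Timeout, requests.ConnectionError):
        return False


if not deepstack_responsive("probe.jpg"):  # probe.jpg: any small test image
    # Once DeepStack starts returning 100/Timeout for everything, only a restart recovers it.
    subprocess.run(["docker", "restart", "deepstack"], check=True)
```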

@bogdanr commented Feb 27, 2022

I trained 4 face images and it's using more than half of the memory on the Jetson Nano. If I train two more images, it runs out of memory.
