OOM error in CUDA #574
Unanswered
ShamshadAhmedShorthillsAI asked this question in Q&A
Replies: 1 comment 2 replies
-
Same data? Does it OOM on every run? PyTorch also sometimes has a small GPU memory leak of ~1 GB, so stopping and restarting LLM Studio might reset it.
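Before restarting, it can help to confirm whether PyTorch is actually holding GPU memory. The sketch below is a generic PyTorch memory check (not from this thread); `cuda_memory_report` is a hypothetical helper name, but `torch.cuda.empty_cache`, `memory_allocated`, and `memory_reserved` are real PyTorch APIs. It degrades gracefully when PyTorch or CUDA is unavailable.

```python
# Hedged sketch: report PyTorch's CUDA memory usage, releasing cached
# blocks first. Works (with a note) even without torch or a GPU.
import importlib.util


def cuda_memory_report() -> str:
    """Return allocated/reserved CUDA memory in MiB, or a note when
    PyTorch or CUDA is unavailable."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch

    if not torch.cuda.is_available():
        return "CUDA not available"
    # Release cached (but unused) blocks back to the driver so the
    # numbers below reflect memory actually held by live tensors.
    torch.cuda.empty_cache()
    alloc = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    return f"allocated={alloc:.1f} MiB reserved={reserved:.1f} MiB"


print(cuda_memory_report())
```

If the reserved figure stays high between runs even after `empty_cache()`, memory is pinned by live objects (or a leaked process), and restarting the application is the reliable reset.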
-
I created an experiment on an NVIDIA GPU machine with 24 GB of memory, and it ran successfully. Now, when I rerun that same experiment (keeping every config the same as the previous run), I get an OOM error. I am using the mistral-7b-v0.1 base model. Any solution for that?