Getting OutOfMemoryError: CUDA out of memory. I have limited GPU resources; I am currently testing a graph-based RAG solution on a Google Colab Tesla T4 GPU. #17686
🤖 Hey @KaifAhmad1, great to see you diving into another challenging adventure with LangChain! Based on the error message you provided, the GPU's memory is being exhausted while your code runs. This is a common issue when working with large models or large amounts of data on a GPU with limited memory, such as the 16 GB Tesla T4. Similar issues in the LangChain repository have been resolved by clearing PyTorch's CUDA memory cache:
```python
import torch

torch.cuda.empty_cache()
```

You can try this to see if it helps resolve your issue. If it does not, it would be helpful to have more information about the specific code you're running and the exact error message, so we can suggest a more specific solution.
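To confirm whether cached-but-unreferenced memory is actually the culprit, it can help to log how much CUDA memory is allocated versus reserved before and after clearing the cache. Here is a minimal sketch; the `report_gpu_memory` helper is just for illustration, and it assumes a single CUDA device (device 0), as on Colab:

```python
import torch

def report_gpu_memory(tag: str = "") -> None:
    """Print allocated vs. reserved CUDA memory in GiB for device 0."""
    allocated = torch.cuda.memory_allocated() / 1024**3
    reserved = torch.cuda.memory_reserved() / 1024**3
    total = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"{tag} allocated={allocated:.2f} GiB, "
          f"reserved={reserved:.2f} GiB, total={total:.2f} GiB")

report_gpu_memory("before empty_cache:")
torch.cuda.empty_cache()  # releases only *cached* blocks back to the driver
report_gpu_memory("after empty_cache:")
```

If `allocated` stays high after `empty_cache()`, the memory is held by live tensors (typically the model weights themselves), and clearing the cache will not help; in that case the model or the batch size has to shrink.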
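If the weights themselves don't fit, one technique often used on a 16 GB T4 is loading the model in 4-bit via bitsandbytes and wrapping it for LangChain. This is not from the reply above, so treat it as an additional suggestion; a sketch under those assumptions follows, where the model ID is a placeholder for whichever model your graph RAG pipeline uses:

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, pipeline)
from langchain_community.llms import HuggingFacePipeline

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder; use your model

# Quantize the weights to 4-bit so a 7B model fits comfortably on a 16 GB T4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on the GPU
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=pipe)  # drop-in LLM for LangChain chains
```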
System Info
cuda = 12.1
pip = 24.0
python = 3.10.10