I was working with about 1,000 sentences, each of length roughly 200. When I fed them all to the model at once, both my computer and Google Colab ran out of memory. When I instead looped over them and fed them in one by one, the process became far too slow: after 2 hours of running the code it had produced embeddings for only 40-50 sentences. So, is there any way to speed up the process?
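The usual fix for this tradeoff is to process the sentences in mini-batches rather than all at once or one at a time, and to disable gradient tracking so activations are freed after each batch. Below is a minimal sketch of that pattern; it assumes a Hugging Face `transformers` model, and the model name, `batch_size`, and `max_length` values are placeholder assumptions, not taken from this issue:

```python
# Minimal batched-embedding sketch (model name and hyperparameters are
# illustrative assumptions, not from the original issue).
import torch
from transformers import AutoTokenizer, AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").to(device)
model.eval()

def embed(sentences, batch_size=32):
    embeddings = []
    with torch.no_grad():  # no autograd graph -> far lower memory use
        for i in range(0, len(sentences), batch_size):
            batch = sentences[i:i + batch_size]
            enc = tokenizer(batch, padding=True, truncation=True,
                            max_length=256, return_tensors="pt").to(device)
            out = model(**enc)
            # Mean-pool token embeddings, masking out padding tokens.
            mask = enc["attention_mask"].unsqueeze(-1).float()
            pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
            embeddings.append(pooled.cpu())  # move results off the GPU immediately
    return torch.cat(embeddings)
```

The batch size bounds peak memory (lower it if you still hit OOM), while `torch.no_grad()` and moving each batch's output to the CPU keep memory flat across the whole corpus, so 1,000 sentences should take minutes rather than hours.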