Tips for training MTL on large dataset #43

Are there any tips for training an MTL model on large datasets that have millions of trainable parameters? I am trying to train on a machine with 1 TB of memory but still run into a memory limit.
Thanks.

Comments
How large are your train/dev/test datasets (in terms of file size)? The architecture loads the complete datasets into memory. If they are too large, your machine will crash, and you would then need to change the code so that the data is streamed from disk rather than read into memory. If your datasets are small (say, smaller than 10 GB), the issue is somewhere else.
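A minimal sketch of what such disk streaming could look like (the file path, batch size, and function name below are hypothetical and not part of this repository):

```python
# Hypothetical sketch: yield the training data in small batches read from disk,
# so the full dataset is never held in memory at once.
def stream_batches(path, batch_size=32):
    batch = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # flush the final, possibly smaller batch
        yield batch

# Usage (hypothetical file name):
# for batch in stream_batches("train.txt", batch_size=64):
#     train_on(batch)
```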
The dataset is small, less than 3 MB per task. Training fails due to the memory limit for any model that has more than 1 million trainable parameters; it runs smoothly for models with fewer than 1 million trainable parameters.
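For reference, a quick way to verify the trainable parameter count, assuming the model is a Keras model (the helper name below is hypothetical):

```python
from keras import backend as K

def count_trainable_params(model):
    # Sum the element counts of every trainable weight tensor.
    return sum(K.count_params(w) for w in model.trainable_weights)

# model.summary() also prints "Trainable params" for a quick check.
```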
That is strange. How many tasks are you training on? Training with more than 1 million parameters should be no issue, even with far less memory. I personally have about 16 GB of RAM, and training larger networks on such datasets runs smoothly. Are you using Python 3.6 (or newer) and a recent Linux system?
Yes, I am using Python 3.6 on CentOS 7. I am having this issue even with just two tasks.
I sadly have no idea why this could be the case; it should work fine. You could also test this implementation: it works similarly to this repository, but it additionally allows the use of ELMo representations. Maybe the issue does not happen there?
Still the same issue, even with the ELMo implementation. Here is the error:
Is Python actually allocating that much memory? Maybe the OS imposes a limit on the memory / heap / stack size, so that the script crashes even if only, e.g., 4 GB of RAM has been allocated. Maybe this thread helps:
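A small sketch for checking such OS limits from within the process (roughly what `ulimit -a` shows in the shell) and how much memory has actually been used; this is Linux-specific, and on Linux `ru_maxrss` is reported in kilobytes:

```python
import resource

# Soft/hard limits the OS imposes on this process.
# resource.RLIM_INFINITY (-1) means "unlimited".
for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_STACK"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print(name, "soft:", soft, "hard:", hard)

# Peak resident set size actually used by this process so far.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("max RSS (KB on Linux):", usage.ru_maxrss)
```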