Unable to build a custom DB #255
Comments
This is typically due to low memory (RAM). You probably need to ask Slurm for more memory. See the table here for typical kaiju-makedb memory usage at various database sizes: https://github.com/bioinformatics-centre/kaiju#creating-the-reference-database-and-index
ok, I will try it, thank you very much! Is there any way to estimate the necessary RAM from the size of the faa file? For example, uniref90.fsa is 40 GB. Have a nice day.
Hm, that's hard to say. Maybe relate the number of sequences in your fasta file to the number of sequences in the nr database from the table, and derive a memory estimate from that.
ok, thank you!
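A minimal sketch of that estimate as a shell calculation. The nr reference numbers below are placeholders, not the actual values from the README table (read the real ones off the table first), and `uniref90.fsa` is assumed to sit in the current directory:

```shell
# Rough RAM estimate for kaiju-makedb, per the suggestion above:
# scale the README's peak-memory figure for nr by the ratio of
# protein counts. REF_SEQS and REF_RAM_GB are PLACEHOLDERS --
# substitute the real numbers from the README table.
MY_SEQS=$(grep -c '^>' uniref90.fsa)    # one '>' header per record
REF_SEQS=200000000                      # placeholder: records in nr
REF_RAM_GB=300                          # placeholder: peak RAM for nr
echo "~$(( MY_SEQS * REF_RAM_GB / REF_SEQS )) GB estimated"
```

This assumes memory scales roughly linearly with sequence count, which the table suggests but does not guarantee; treat the result as a lower bound and pad your Slurm request accordingly.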
Hello,
thank you for creating this amazing tool!
I'm trying to build a custom uniref90 DB from the uniref90.fsa file. I have formatted the file like this:
and then I run the command with my protein faa file:
But each time the process gets killed and I can't find the reason. Here is the message on the terminal:
I have tested it with a smaller file containing only the first 50,000 lines of my faa file, and it runs successfully to the end.
I can't figure out where the problem is. Can anyone help me, please?
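One quick check for a process that dies with a bare "Killed" message, as described above: the kernel out-of-memory (OOM) killer logs its victims, so the kernel log usually confirms the cause. A sketch (reading `dmesg` may require root, and the Slurm job ID is a placeholder):

```shell
# A bare "Killed" with no other error usually means the kernel's
# OOM killer terminated the process; the kernel log records it.
# (Reading dmesg may require root on some systems.)
dmesg | grep -iE 'out of memory|killed process' | tail -n 5

# On a Slurm cluster, accounting shows it too -- State reads
# OUT_OF_MEMORY when the job hit its memory limit:
#   sacct -j <jobid> -o JobID,State,MaxRSS
```

If the log confirms an OOM kill, the fix is the one suggested in the comments: request more memory for the job.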
Thanks,
juejun