Hi, thank you for developing this useful tool.
I have a large number of genomes (>100k) along with their gbk files, and I want to annotate them with Phold. Would a large --batch_size (e.g. 128) help process the files faster? I ask because the documentation mentions that a batch size of 1 is usually faster.
Also, in general, should I combine all of my gbk files into a single input file, or can I run phold predict on separate gbk files in parallel?
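For context, here is a minimal sketch of what I mean by running the files in parallel, keeping each genome as its own input (the -i/-o/-t flags are from the Phold docs; the directory layout, job counts, and use of GNU parallel are just placeholders for illustration):

```bash
# Illustrative only: run phold predict on each GenBank file independently,
# a few jobs at a time, instead of merging 100k genomes into one input.
# {} is the input path; {/.} is its basename without the .gbk extension.
find genomes/ -name '*.gbk' | \
  parallel -j 4 'phold predict -i {} -o phold_out/{/.} -t 2 --batch_size 1'
```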
bw