speeding up/parallelizing make_chains.py with Slurm: NOTE: Error submitting process 'execute_jobs (##)' for execution -- Execution is retried #58
Comments
@kirilenkobm Could you please have a look at whether `--cluster_parameters` is a retired parameter? Here is the top part of the …
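A quick way to check whether the flag still exists (a generic sketch, not specific to this repo; it assumes the script exposes an argparse-style `--help`, which the "unrecognized arguments" error below suggests):

```
# List the accepted arguments and filter for anything cluster-related
./make_chains.py --help | grep -i cluster
```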
@MichaelHiller am I understanding the documentation correctly? The Nextflow Slurm page says: …

I know. FWIW, I do not see …
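For reference, the Nextflow docs select the Slurm executor through configuration rather than a pipeline flag. Below is a minimal sketch using standard Nextflow config options; whether `make_chains.py` honors a user-supplied `nextflow.config` in the launch directory is an assumption, since the pipeline may manage its own config:

```
# Standard Nextflow settings for Slurm; values are site-specific examples
cat > nextflow.config <<'EOF'
process {
    executor = 'slurm'                      // submit each task via sbatch
    cpus = 16                               // CPUs requested per task
    clusterOptions = '--partition=normal'   // extra sbatch flags; site-specific
}
EOF
```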
I'm trying to help a researcher speed up `make_chains.py` results for a mammal. Using this closed issue regarding parallelization as inspiration, we'd like to speed up the process via our Slurm cluster running RHEL 8. I tried requesting a node via an interactive srun session, starting with 16 CPUs via `--ntasks` and `-c`. Using `--executor local` as suggested in the closed thread was painfully slow. The user there mentioned `--cluster_parameters`, but that results in:

```
make_chains.py: error: unrecognized arguments: --cluster_parameters cpus=16
```

May I request assistance here to get the correct syntax?

P.S. I can confirm the suggested shebang fix in this thread also works to start the sample jobs.
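For context, a minimal sketch of the interactive setup described above; the srun values and the `make_chains.py` positional arguments are examples, not the verified signature:

```
# Example allocation: one task with 16 CPUs (values are site-specific)
srun --ntasks=1 -c 16 --mem=32G --time=08:00:00 --pty bash

# Inside the allocation, the local executor can use the allocated CPUs.
# Positional arguments below are placeholders:
./make_chains.py target_name query_name target.2bit query.2bit --executor local
```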